Jan 21 17:16:38 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 17:16:38 crc restorecon[4685]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
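
The labels in these entries are SELinux contexts of the form user:role:type:level. The trailing s0:c10,c16 is the MLS/MCS portion: sensitivity s0 plus a pair of categories that the container runtime allocates per pod, which is why different pods earlier in this log carry different category pairs. A minimal sketch, assuming a Linux host with SELinux enabled and Python 3 (the path below is hypothetical), of reading the same on-disk label that restorecon compares against the file-contexts policy:

    import os

    def selinux_context(path: str) -> str:
        # The kernel exposes a file's SELinux label through the
        # security.selinux extended attribute; the raw value is
        # NUL-terminated, e.g.
        # b"system_u:object_r:container_file_t:s0:c10,c16\x00".
        raw = os.getxattr(path, "security.selinux")
        return raw.rstrip(b"\x00").decode()

    ctx = selinux_context("/var/lib/kubelet/config.json")  # hypothetical path
    user, role, type_, level = ctx.split(":", 3)
    print(f"user={user} role={role} type={type_} level={level}")

A "not reset as customized by admin" line records a skip, not a failure: restorecon leaves a file alone when its current type (container_file_t here) is one the policy treats as customizable, unless it is forced to relabel.

Jan 21 17:16:38 crc restorecon[4685]: 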
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
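
The hex names such as b7a5b843.0 that appear alongside the .pem files are OpenSSL directory-hash entries: when a trust directory is prepared (c_rehash, or update-ca-trust extracting into a directory-hash layout like this one), each certificate also gets an entry named after the hash of its subject name plus a collision counter, so OpenSSL can locate an issuer by subject without scanning every file. A small sketch of reproducing such a name, assuming the pyOpenSSL package is installed and using an illustrative local PEM file:

    # Assumes the pyOpenSSL package (pip install pyopenssl).
    from OpenSSL import crypto

    def hash_link_name(pem_path: str, n: int = 0) -> str:
        with open(pem_path, "rb") as f:
            cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())
        # X509.subject_name_hash() wraps OpenSSL's X509_subject_name_hash();
        # the directory-hash entry is that value as 8 hex digits plus a
        # collision index (.0, .1, ... for subjects that hash alike).
        return f"{cert.subject_name_hash():08x}.{n}"

    print(hash_link_name("GlobalSign_Root_CA.pem"))  # hypothetical local copy

Jan 21 17:16:38 crc restorecon[4685]: 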
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 17:16:38 crc restorecon[4685]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:38 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 17:16:39 crc restorecon[4685]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 17:16:39 crc kubenswrapper[4823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 17:16:39 crc kubenswrapper[4823]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 17:16:39 crc kubenswrapper[4823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 17:16:39 crc kubenswrapper[4823]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
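The four deprecation warnings above all point at the same remedy: drop the command-line flags and set the equivalent fields in the file passed via the kubelet's --config flag (see the linked kubelet-config-file documentation). As a minimal sketch only — the CRI-O socket path, plugin directory, taint, and eviction threshold below are illustrative placeholder values, not values read from this node — the corresponding KubeletConfiguration would look like:

    # Hypothetical kubelet --config file covering the deprecated flags
    # logged above; all concrete values are illustrative assumptions.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (CRI-O socket path assumed)
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir (directory is a placeholder)
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints (taint chosen for illustration)
    registerWithTaints:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
    # --minimum-container-ttl-duration is superseded by eviction
    # settings; evictionHard is the config-file form of --eviction-hard
    evictionHard:
      memory.available: 100Mi

Restarting the kubelet with --config pointing at such a file, and the corresponding flags removed from the unit's command line, would silence these per-flag deprecation warnings while keeping the same effective settings.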
Jan 21 17:16:39 crc kubenswrapper[4823]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 17:16:39 crc kubenswrapper[4823]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.180042 4823 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182879 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182903 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182909 4823 feature_gate.go:330] unrecognized feature gate: Example Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182914 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182919 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182925 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182930 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182936 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182942 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182948 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182953 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182957 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182962 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182966 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182970 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182974 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182978 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182981 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182985 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182991 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 
17:16:39.182995 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.182999 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183003 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183007 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183010 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183014 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183018 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183022 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183025 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183029 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183032 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183036 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183040 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183044 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183047 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183051 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183054 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183058 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183062 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183065 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183069 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183072 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183078 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183081 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183085 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183088 4823 feature_gate.go:330] unrecognized feature 
gate: VSphereDriverConfiguration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183092 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183096 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183099 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183103 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183108 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183112 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183116 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183120 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183124 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183128 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183132 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183136 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183141 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183145 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183151 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183157 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183162 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183167 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183171 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183177 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
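Every "unrecognized feature gate" line above comes from the same check at feature_gate.go:330: the name in the requested gate map is not registered with this binary (these are OpenShift-side gates that the plain Kubernetes component does not know about), while registered gates that are GA or deprecated are still applied, with a warning (feature_gate.go:353 and :351). A minimal sketch of that behavior, not the kubelet's actual implementation:

```go
// Sketch of the gate-application behavior visible in the log: unknown names
// warn and are skipped; GA/deprecated names warn and are applied.
package main

import "log"

type stage int

const (
	alpha stage = iota
	beta
	ga
	deprecated
)

// A tiny, invented subset of registered gates for illustration.
var known = map[string]stage{
	"KMSv1":                     deprecated,
	"ValidatingAdmissionPolicy": ga,
	"CloudDualStackNodeIPs":     ga,
	"NodeSwap":                  beta,
}

func setGates(requested map[string]bool) map[string]bool {
	effective := map[string]bool{}
	for name, val := range requested {
		st, ok := known[name]
		switch {
		case !ok:
			log.Printf("W] unrecognized feature gate: %s", name)
			continue // skipped, exactly one warning per application pass
		case st == ga:
			log.Printf("W] Setting GA feature gate %s=%v. It will be removed in a future release.", name, val)
		case st == deprecated:
			log.Printf("W] Setting deprecated feature gate %s=%v. It will be removed in a future release.", name, val)
		}
		effective[name] = val
	}
	return effective
}

func main() {
	setGates(map[string]bool{
		"KMSv1": true, "GatewayAPI": true, "ValidatingAdmissionPolicy": true,
	})
}
```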
Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183182 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183188 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183194 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183198 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.183202 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183445 4823 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183458 4823 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183470 4823 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183476 4823 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183483 4823 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183487 4823 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183495 4823 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183500 4823 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183505 4823 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183510 4823 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183515 4823 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183519 4823 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183523 4823 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183528 4823 flags.go:64] FLAG: --cgroup-root="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183532 4823 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183536 4823 flags.go:64] FLAG: --client-ca-file="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183540 4823 flags.go:64] FLAG: --cloud-config="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183544 4823 flags.go:64] FLAG: --cloud-provider="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183548 4823 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183554 4823 flags.go:64] FLAG: --cluster-domain="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183558 4823 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183562 4823 flags.go:64] FLAG: --config-dir="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183566 4823 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183571 4823 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 17:16:39 crc 
kubenswrapper[4823]: I0121 17:16:39.183577 4823 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183582 4823 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183587 4823 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183593 4823 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183598 4823 flags.go:64] FLAG: --contention-profiling="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183603 4823 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183609 4823 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183614 4823 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183619 4823 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183627 4823 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183632 4823 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183639 4823 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183645 4823 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183651 4823 flags.go:64] FLAG: --enable-server="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183656 4823 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183664 4823 flags.go:64] FLAG: --event-burst="100" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183669 4823 flags.go:64] FLAG: --event-qps="50" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183675 4823 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183680 4823 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183686 4823 flags.go:64] FLAG: --eviction-hard="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183693 4823 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183698 4823 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183703 4823 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183708 4823 flags.go:64] FLAG: --eviction-soft="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183716 4823 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183722 4823 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183727 4823 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183733 4823 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183738 4823 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183744 4823 flags.go:64] FLAG: 
--fail-swap-on="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183749 4823 flags.go:64] FLAG: --feature-gates="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183757 4823 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183762 4823 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183768 4823 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183773 4823 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183779 4823 flags.go:64] FLAG: --healthz-port="10248" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183784 4823 flags.go:64] FLAG: --help="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183789 4823 flags.go:64] FLAG: --hostname-override="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183794 4823 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183799 4823 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183806 4823 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183812 4823 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183817 4823 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183831 4823 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183836 4823 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183841 4823 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183847 4823 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183875 4823 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183881 4823 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183886 4823 flags.go:64] FLAG: --kube-reserved="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183891 4823 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183895 4823 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183899 4823 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183903 4823 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183908 4823 flags.go:64] FLAG: --lock-file="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183912 4823 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183917 4823 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183922 4823 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183929 4823 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183934 4823 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 17:16:39 crc 
kubenswrapper[4823]: I0121 17:16:39.183938 4823 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183943 4823 flags.go:64] FLAG: --logging-format="text" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183947 4823 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183952 4823 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183956 4823 flags.go:64] FLAG: --manifest-url="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183960 4823 flags.go:64] FLAG: --manifest-url-header="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183967 4823 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183973 4823 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183979 4823 flags.go:64] FLAG: --max-pods="110" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183983 4823 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183988 4823 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183993 4823 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.183997 4823 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184002 4823 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184007 4823 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184013 4823 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184025 4823 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184030 4823 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184035 4823 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184040 4823 flags.go:64] FLAG: --pod-cidr="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184044 4823 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184051 4823 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184055 4823 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184060 4823 flags.go:64] FLAG: --pods-per-core="0" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184064 4823 flags.go:64] FLAG: --port="10250" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184069 4823 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184073 4823 flags.go:64] FLAG: --provider-id="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184077 4823 flags.go:64] FLAG: --qos-reserved="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184082 4823 flags.go:64] FLAG: --read-only-port="10255" Jan 21 17:16:39 crc 
kubenswrapper[4823]: I0121 17:16:39.184088 4823 flags.go:64] FLAG: --register-node="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184092 4823 flags.go:64] FLAG: --register-schedulable="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184097 4823 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184105 4823 flags.go:64] FLAG: --registry-burst="10" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184110 4823 flags.go:64] FLAG: --registry-qps="5" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184114 4823 flags.go:64] FLAG: --reserved-cpus="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184118 4823 flags.go:64] FLAG: --reserved-memory="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184124 4823 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184128 4823 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184133 4823 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184137 4823 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184141 4823 flags.go:64] FLAG: --runonce="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184145 4823 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184149 4823 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184154 4823 flags.go:64] FLAG: --seccomp-default="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184159 4823 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184163 4823 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184167 4823 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184172 4823 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184178 4823 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184183 4823 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184187 4823 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184191 4823 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184195 4823 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184200 4823 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184204 4823 flags.go:64] FLAG: --system-cgroups="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184208 4823 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184216 4823 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184221 4823 flags.go:64] FLAG: --tls-cert-file="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184225 4823 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 17:16:39 
crc kubenswrapper[4823]: I0121 17:16:39.184232 4823 flags.go:64] FLAG: --tls-min-version="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184236 4823 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184241 4823 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184246 4823 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184250 4823 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184255 4823 flags.go:64] FLAG: --v="2" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184261 4823 flags.go:64] FLAG: --version="false" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184267 4823 flags.go:64] FLAG: --vmodule="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184272 4823 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184277 4823 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184394 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184400 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184406 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184410 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184414 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184419 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184423 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184427 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184430 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184434 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184438 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184444 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184447 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184452 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
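The long run of FLAG: lines above is the kubelet's startup dump of every registered flag with its effective value, one line per flag, all emitted from flags.go:64. The same pattern with the stdlib flag package, standing in for the pflag flag set the kubelet actually uses:

```go
// Sketch of the `FLAG: --name="value"` startup dump, using stdlib flag as a
// stand-in for the kubelet's real flag set.
package main

import (
	"flag"
	"log"
)

func main() {
	addr := flag.String("address", "0.0.0.0", "bind address")
	v := flag.Int("v", 2, "log verbosity")
	flag.Parse()
	_, _ = addr, v

	// VisitAll walks every registered flag, set or not, in name order,
	// which is why the dump above is alphabetical.
	flag.VisitAll(func(f *flag.Flag) {
		log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
	})
}
```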
Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184456 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184460 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184464 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184468 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184472 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184476 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184480 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184484 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184487 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184493 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184497 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184500 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184504 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184508 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184512 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184515 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184519 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184523 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184527 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184530 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184534 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184538 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184542 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184546 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184549 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184553 4823 feature_gate.go:330] unrecognized 
feature gate: PrivateHostedZoneAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184556 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184560 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184563 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184568 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184572 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184575 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184579 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184583 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184587 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184590 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184594 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184597 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184601 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184605 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184608 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184612 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184616 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184619 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184623 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184626 4823 feature_gate.go:330] unrecognized feature gate: Example Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184629 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184633 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184637 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184640 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184643 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184647 4823 feature_gate.go:330] 
unrecognized feature gate: EtcdBackendQuota Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184650 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184655 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184659 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184662 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.184667 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.184875 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.195907 4823 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.195956 4823 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196052 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196063 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196068 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196075 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196080 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196085 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196090 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196095 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196099 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196104 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196108 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196112 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196118 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196123 4823 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196127 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196132 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196136 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196142 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196152 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196159 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196164 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196169 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196173 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196177 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196182 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196186 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196191 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196196 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196201 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196205 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196209 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196215 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196220 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196224 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196230 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196234 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196240 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
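The summary entry "feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}" that appears between these warning runs (feature_gate.go:386) is the effective gate set: compiled-in defaults overlaid with the explicit assignments that survived the checks above. A sketch of that merge and rendering, with invented defaults:

```go
// Sketch of assembling and printing the effective gate map logged at
// feature_gate.go:386. The default values here are for illustration only.
package main

import (
	"fmt"
	"sort"
	"strings"
)

func effectiveGates(defaults, overrides map[string]bool) string {
	merged := map[string]bool{}
	for k, v := range defaults {
		merged[k] = v
	}
	for k, v := range overrides { // explicit settings win over defaults
		merged[k] = v
	}
	keys := make([]string, 0, len(merged))
	for k := range merged {
		keys = append(keys, k)
	}
	sort.Strings(keys) // the logged map is rendered in sorted key order
	parts := make([]string, len(keys))
	for i, k := range keys {
		parts[i] = fmt.Sprintf("%s:%v", k, merged[k])
	}
	return "feature gates: {map[" + strings.Join(parts, " ") + "]}"
}

func main() {
	fmt.Println(effectiveGates(
		map[string]bool{"NodeSwap": false, "EventedPLEG": false},
		map[string]bool{"KMSv1": true, "ValidatingAdmissionPolicy": true},
	))
}
```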
Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196246 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196251 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196256 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196260 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196265 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196269 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196274 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196279 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196283 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196288 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196292 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196296 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196301 4823 feature_gate.go:330] unrecognized feature gate: Example Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196305 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196310 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196314 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196318 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196323 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196328 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196332 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196336 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196341 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196346 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196350 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196355 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196359 4823 feature_gate.go:330] unrecognized 
feature gate: InsightsConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196364 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196368 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196372 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196376 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196381 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196385 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196390 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196397 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.196406 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196536 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196545 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196550 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196555 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196559 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196564 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196568 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196573 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196578 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196582 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196586 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196591 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196595 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196599 4823 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196603 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196608 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196612 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196616 4823 feature_gate.go:330] unrecognized feature gate: Example Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196624 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196629 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196633 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196638 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196642 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196646 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196652 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196656 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196660 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196665 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196669 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196675 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196683 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196689 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196694 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196699 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196704 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196708 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196713 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196717 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196721 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196725 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196730 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196734 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196738 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196744 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196750 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196755 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196760 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196765 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196769 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196774 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196779 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196784 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196788 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196792 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196796 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196801 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196807 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196812 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196816 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196820 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196824 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196830 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
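By this point the same per-gate warnings have been emitted several times over, because the gate map is re-applied at multiple stages of startup; note that the "feature gates: {map[...]}" summary is identical on each pass. When triaging a log like this it helps to collapse the repetition first. A small, hypothetical helper that reduces journal output to one line per distinct unrecognized gate:

```go
// Hypothetical triage helper: count distinct "unrecognized feature gate"
// names in journal output piped on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	re := regexp.MustCompile(`unrecognized feature gate: (\S+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	gates := make([]string, 0, len(counts))
	for g := range counts {
		gates = append(gates, g)
	}
	sort.Strings(gates)
	for _, g := range gates {
		fmt.Printf("%-55s seen %d times\n", g, counts[g])
	}
}
```

Run it as, for example, `journalctl -u kubelet | go run dedupe.go` (the unit name is an assumption; use whatever unit the "Starting Kubernetes Kubelet..." entry belongs to on this host).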
Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196835 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196840 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196845 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196850 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196874 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196878 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196883 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196888 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.196893 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.196901 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.197521 4823 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.200729 4823 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.200867 4823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
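The bootstrap entries above show client certificate rotation starting from the combined cert/key PEM at /var/lib/kubelet/pki/kubelet-client-current.pem, and the entries that follow pick a rotation deadline (2025-11-14) well before the expiry (2026-02-24). That gap is expected: the client-go certificate manager schedules rotation at a jittered fraction of the certificate's validity window, roughly 70-90% as I read the upstream code; treat the exact fraction as an assumption. A sketch of that arithmetic:

```go
// Sketch of the rotation-deadline computation behind the certificate_manager
// log lines. The 0.7-0.9 jitter range is my reading of upstream client-go,
// not something this log confirms.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"math/rand"
	"time"
)

func rotationDeadline(cert *x509.Certificate) time.Time {
	total := cert.NotAfter.Sub(cert.NotBefore)
	jitter := 0.7 + 0.2*rand.Float64() // rotate somewhere in the 70-90% band
	return cert.NotBefore.Add(time.Duration(float64(total) * jitter))
}

func main() {
	// The current PEM holds both the certificate and the private key, so the
	// same path is passed for both arguments.
	pem := "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		log.Fatal(err)
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("expires %s, rotate at %s\n", leaf.NotAfter, rotationDeadline(leaf))
}
```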
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.202368 4823 server.go:997] "Starting client certificate rotation"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.202426 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.203484 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-14 05:03:49.957039532 +0000 UTC
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.204480 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.210052 4823 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.211452 4823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.211926 4823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.226810 4823 log.go:25] "Validated CRI v1 runtime API"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.251135 4823 log.go:25] "Validated CRI v1 image API"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.253357 4823 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.256494 4823 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-17-11-41-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.256533 4823 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.270740 4823 manager.go:217] Machine: {Timestamp:2026-01-21 17:16:39.269470745 +0000 UTC m=+0.195601625 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b2b8fe66-0f89-498e-96c2-0d424acf77a6 BootID:91025cce-7a80-4c4c-9dde-4315071ed327 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:11:34:b4 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:11:34:b4 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:48:f6:23 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:79:78:a1 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ea:c7:0e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ec:76:d7 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:26:8e:a1:15:36:b8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4a:96:c5:63:e2:5a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.271024 4823 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.271194 4823 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.271490 4823 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.271664 4823 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.271700 4823 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.272055 4823 topology_manager.go:138] "Creating topology manager with none policy"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.272069 4823 container_manager_linux.go:303] "Creating device plugin manager"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.272263 4823 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.272298 4823 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.272660 4823 state_mem.go:36] "Initialized new in-memory state store"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.272747 4823 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.273480 4823 kubelet.go:418] "Attempting to sync node with API server"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.273508 4823 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.273532 4823 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.273545 4823 kubelet.go:324] "Adding apiserver pod source"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.273560 4823 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.276433 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.276578 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.276969 4823 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.277586 4823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
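The container_manager_linux.go:272 record above embeds the whole node config as JSON, including SystemReserved (200m CPU, 350Mi memory, 350Mi ephemeral storage) and the hard eviction thresholds (nodefs.available 10%, imagefs.available 15%, memory.available 100Mi, and so on). A sketch for unmarshaling just that fragment; the struct types here are simplified stand-ins I wrote for illustration, not the kubelet's own:

package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the kubelet's eviction threshold types.
type Value struct {
	Quantity   *string `json:"Quantity"`   // e.g. "100Mi", or null when a percentage is used
	Percentage float64 `json:"Percentage"` // e.g. 0.1 for 10%
}

type Threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    Value  `json:"Value"`
}

func main() {
	// Fragment copied from the HardEvictionThresholds field in the log record.
	raw := `[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	         {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]`
	var ts []Threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		q := "<nil>"
		if t.Value.Quantity != nil {
			q = *t.Value.Quantity
		}
		fmt.Printf("%s %s quantity=%s pct=%g\n", t.Signal, t.Operator, q, t.Value.Percentage)
	}
}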
Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.278011 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.278986 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.279721 4823 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280425 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280451 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280461 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280469 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280483 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280492 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280500 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280512 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280523 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280532 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280545 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280553 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.280731 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.281294 4823 server.go:1280] "Started kubelet"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.281809 4823 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.281887 4823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.281978 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.282681 4823 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.283746 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.283795 4823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.284108 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:37:47.78680412 +0000 UTC
Jan 21 17:16:39 crc systemd[1]: Started Kubernetes Kubelet.
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.284581 4823 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.284608 4823 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.284650 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.284596 4823 server.go:460] "Adding debug handlers to kubelet server"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.286134 4823 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.286386 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.287067 4823 factory.go:153] Registering CRI-O factory
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.287108 4823 factory.go:221] Registration of the crio container factory successfully
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.287267 4823 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.287288 4823 factory.go:55] Registering systemd factory
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.287300 4823 factory.go:221] Registration of the systemd container factory successfully
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.287334 4823 factory.go:103] Registering Raw factory
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.287354 4823 manager.go:1196] Started watching for new ooms in manager
Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.287392 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.287462 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
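Of the cAdvisor factory registrations above, only the containerd probe fails, and it fails on a missing socket rather than a refused connection: /run/containerd/containerd.sock does not exist because this node runs CRI-O, so the message is expected and harmless. The same presence check can be reproduced with a plain unix dial (a sketch, assuming the standard containerd socket path):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the containerd socket the way a client would.
	// On a CRI-O host this fails with "no such file or directory",
	// matching the factory.go:219 record above.
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", time.Second)
	if err != nil {
		fmt.Println("containerd not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is present")
}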
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.288247 4823 manager.go:319] Starting recovery of all containers
Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.293717 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cce7f2e3688ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 17:16:39.281256654 +0000 UTC m=+0.207387514,LastTimestamp:2026-01-21 17:16:39.281256654 +0000 UTC m=+0.207387514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302558 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302759 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302803 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302837 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302871 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302887 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302902 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302918 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
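The long run of reconstruct.go:130 records that follows is the volume manager rebuilding its actual state of world from the directories under /var/lib/kubelet/pods after the restart; each volume is marked "uncertain" until it can be reconciled against the API server, which is still unreachable at this point. To see what was recovered per pod, a hypothetical tally over a saved journal (the regexp is keyed to the fields shown in these records):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Reads journal lines on stdin and tallies reconstructed volumes per pod UID.
func main() {
	re := regexp.MustCompile(`reconstruct\.go:130\].*?podName="([^"]+)" volumeName="([^"]+)"`)
	perPod := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			perPod[m[1]] = append(perPod[m[1]], m[2])
		}
	}
	for pod, vols := range perPod {
		fmt.Printf("%s: %d volumes\n", pod, len(vols))
	}
}

Fed something like journalctl -u kubelet (unit name assumed from the "Starting Kubernetes Kubelet" line), this prints one line per pod UID with the count of volumes the kubelet reconstructed for it.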
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302940 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302956 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.302972 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303013 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303047 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303064 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303084 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303119 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303151 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303164 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303195 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303227 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303275 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303329 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303364 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303410 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303426 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303459 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303737 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303760 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303782 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303801 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303817 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303924 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303962 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.303992 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304067 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304105 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304119 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304132 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304146 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304179 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304207 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304226 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304276 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304307 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304377 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304398 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304415 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304434 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304468 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304506 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304543 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304609 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304701 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304761 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304799 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304834 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304902 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304940 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304975 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.304993 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305112 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305174 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305197 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305282 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305315 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305335 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305387 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305446 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305500 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305540 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305582 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305603 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.305622 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306339 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306404 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306422 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306443 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306463 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306479 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306497 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306513 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306526 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306541 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306555 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306595 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306612 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306626 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306640 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306655 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306671 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306687 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306701 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306718 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306735 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306755 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306782 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306800 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306818 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306835 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306869 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306887 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306905 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306918 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306936 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306961 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306978 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.306994 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307009 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307024 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307038 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307052 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307066 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307081 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307094 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307108 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307121 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307133 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307146 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307158 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307170 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307181 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307193 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307206 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307241 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307257 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307272 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307289 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307305 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.307983 4823 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308008 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308023 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308037 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308050 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308062 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308075 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308087 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308100 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308112 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308125 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308136 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308149 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308163 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308174 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308187 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308198 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308209 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308222 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308233 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308244 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308258 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308272 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308286 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308299 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308317 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308331 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308349 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308365 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308381 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308396 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308412 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308427 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308440 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308455 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308476 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308501 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308519 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308534 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308546 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308558 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308570 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308584 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308597 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308610 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308621 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308635 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308651 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308664 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308678 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308690 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308702 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308714 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308728 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308739 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308752 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308766 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308783 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308796 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308807 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308819 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308831 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308844 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308875 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308888 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308903 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308916 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308946 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308958 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308970 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308983 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.308995 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309009 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309021 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309034 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309046 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309059 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309074 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309086 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309099 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309109 4823 reconstruct.go:97] "Volume reconstruction finished" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.309118 4823 reconciler.go:26] "Reconciler: start to sync state" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.313769 4823 manager.go:324] Recovery completed Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.324369 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.326264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.326402 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.326495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.327882 4823 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.327897 4823 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.327918 4823 state_mem.go:36] "Initialized new in-memory state store" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.339136 4823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.339602 4823 policy_none.go:49] "None policy: Start" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.340520 4823 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.340550 4823 state_mem.go:35] "Initializing new in-memory state store" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.342301 4823 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.342342 4823 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.342375 4823 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.342526 4823 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.344104 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.344176 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.385445 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.405680 4823 manager.go:334] "Starting Device Plugin manager" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.405920 4823 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.405953 4823 server.go:79] "Starting device plugin registration server" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.406559 4823 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.406617 4823 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.406804 4823 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.407035 4823 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.407060 4823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.414070 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.442967 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.443105 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.444845 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.444907 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.444917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.445486 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.445745 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.445824 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.446967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.447060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.447077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.447463 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.447488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.447500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.447661 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.447845 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.447933 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.449213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.449311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.450014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.450217 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.450408 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.450484 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.450746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.450762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.450791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.451530 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.451594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.451623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.451661 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.451682 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.451693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.451964 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.452061 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.452105 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.453163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.453232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.453258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.453183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.453307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.453318 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.453565 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.453625 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.454915 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.454943 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.454953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.487527 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="400ms" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.507521 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.508475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.508506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.508520 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.508542 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.509019 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511098 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511150 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511194 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511234 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511273 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511343 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511413 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511471 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511557 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511684 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511772 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511844 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.511937 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.512011 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.613783 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.613911 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.613944 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.613966 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.613985 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614004 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614028 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614049 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614078 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614107 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614164 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614093 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614204 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614269 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614310 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614296 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 
17:16:39.614314 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614280 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614612 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614713 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614791 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614824 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614848 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614884 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614916 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
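All of the volumes verified and mounted above are host-path volumes belonging to the five static pods, so MountVolume.SetUp amounts to little more than confirming a path exists on the host, which is why every mount succeeds within about a millisecond even with the API server still unreachable. The sketch below shows only that essential check; the example path and the create-if-missing behaviour are illustrative assumptions, not taken from this log.

// A sketch of the core of a host-path volume SetUp: stat the path and, for a
// DirectoryOrCreate-style volume, create it if absent. Illustrative only.
package main

import (
	"fmt"
	"os"
)

func setUpHostPath(path string, createIfMissing bool) error {
	fi, err := os.Stat(path)
	if os.IsNotExist(err) && createIfMissing {
		return os.MkdirAll(path, 0o755) // DirectoryOrCreate semantics
	}
	if err != nil {
		return err // unexpected stat failure, or missing without create
	}
	if !fi.IsDir() {
		return fmt.Errorf("%s exists but is not a directory", path)
	}
	return nil
}

func main() {
	// Hypothetical example path; the log names the volumes but not their host paths.
	if err := setUpHostPath("/tmp/example-static-pod-dir", true); err != nil {
		fmt.Println("setup failed:", err)
	} else {
		fmt.Println("setup succeeded")
	}
}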
pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614904 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.614892 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.615037 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.709500 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.711558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.711642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.711663 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.711706 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.712481 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.773939 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.798540 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.807613 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6c10be727797db52be8482d79f1ffa460d7038850c3cdfd0ed8cc4950bc3ba61 WatchSource:0}: Error finding container 6c10be727797db52be8482d79f1ffa460d7038850c3cdfd0ed8cc4950bc3ba61: Status 404 returned error can't find the container with id 6c10be727797db52be8482d79f1ffa460d7038850c3cdfd0ed8cc4950bc3ba61 Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.812446 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.830629 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: I0121 17:16:39.839612 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:39 crc kubenswrapper[4823]: W0121 17:16:39.862038 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b5810e72402afebadbf11559ba08805d6d437a4273b348174d970d5db3926020 WatchSource:0}: Error finding container b5810e72402afebadbf11559ba08805d6d437a4273b348174d970d5db3926020: Status 404 returned error can't find the container with id b5810e72402afebadbf11559ba08805d6d437a4273b348174d970d5db3926020 Jan 21 17:16:39 crc kubenswrapper[4823]: E0121 17:16:39.888976 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="800ms" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.113360 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.115213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.115267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.115278 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.115312 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 17:16:40 crc kubenswrapper[4823]: E0121 17:16:40.116005 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Jan 21 17:16:40 crc kubenswrapper[4823]: W0121 17:16:40.136490 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 17:16:40 crc kubenswrapper[4823]: E0121 17:16:40.136615 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 17:16:40 crc kubenswrapper[4823]: W0121 17:16:40.216214 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 17:16:40 crc kubenswrapper[4823]: E0121 17:16:40.216325 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.282976 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.285261 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:34:36.820669772 +0000 UTC Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.350377 4823 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae" exitCode=0 Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.350488 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.350656 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d192f8f1936cd5a1c4f0a52cbbb984ead5edecaa6f370449300757ae7b639924"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.350796 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.352277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.352319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.352333 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.353282 4823 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770" exitCode=0 Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.353387 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.353431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c10be727797db52be8482d79f1ffa460d7038850c3cdfd0ed8cc4950bc3ba61"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.353613 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.355007 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.355035 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.355047 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.357558 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0" exitCode=0 Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.357622 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.357646 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b5810e72402afebadbf11559ba08805d6d437a4273b348174d970d5db3926020"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.357788 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.358749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.358782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.358798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.359813 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.359891 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9fda562cbc63d8cf2c9bbeafd1fadb084480481e28942be8a7cfa9443c6627e6"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.361815 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7b24ba2286e6ed150be24e8349916e35188e679a8cfdc8eaee092116f4f1d32e" exitCode=0 Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.361890 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7b24ba2286e6ed150be24e8349916e35188e679a8cfdc8eaee092116f4f1d32e"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.361948 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6c60e5933022afd8e848f3a0e9fb2b35a41b7fb57511480cc05ddbe230e8e883"} Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.362110 4823 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.363006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.363046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.363059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.363464 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.364510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.364559 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.364574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:40 crc kubenswrapper[4823]: E0121 17:16:40.406141 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cce7f2e3688ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 17:16:39.281256654 +0000 UTC m=+0.207387514,LastTimestamp:2026-01-21 17:16:39.281256654 +0000 UTC m=+0.207387514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 17:16:40 crc kubenswrapper[4823]: W0121 17:16:40.680067 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 17:16:40 crc kubenswrapper[4823]: E0121 17:16:40.680513 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 17:16:40 crc kubenswrapper[4823]: E0121 17:16:40.690477 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="1.6s" Jan 21 17:16:40 crc kubenswrapper[4823]: W0121 17:16:40.802280 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 17:16:40 crc kubenswrapper[4823]: E0121 17:16:40.802381 4823 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.916921 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.918457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.918491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.918503 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:40 crc kubenswrapper[4823]: I0121 17:16:40.918533 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 17:16:40 crc kubenswrapper[4823]: E0121 17:16:40.919311 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.285867 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:42:02.77353079 +0000 UTC Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.365272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"93b5ce32fb9cb2dd4ccaf6b27c00ec29029ec90005b80de18cd25fa41ac3d45a"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.365440 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.366420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.366448 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.366460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.367897 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.367919 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.367928 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.367964 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.368645 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.368669 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.368683 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.371050 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.371073 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.371084 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.371099 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.373074 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.373140 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.373158 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.373192 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.374219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.374261 4823 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.374273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.374953 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="43aa74d3343203bb661ee22a3dc9f709172fd4802eeccbd8a3b196a2dac760b1" exitCode=0 Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.374990 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"43aa74d3343203bb661ee22a3dc9f709172fd4802eeccbd8a3b196a2dac760b1"} Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.375125 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.376198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.376236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.376250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.402844 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 17:16:41 crc kubenswrapper[4823]: I0121 17:16:41.634071 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.286729 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:31:53.190415383 +0000 UTC Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.385138 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76"} Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.385262 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.386966 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.387040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.387168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.390008 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2d269536b5fc6eb088cbf16b4e536e2c55e594281f8772c7801acec5f30fa16d" exitCode=0 Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.390046 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2d269536b5fc6eb088cbf16b4e536e2c55e594281f8772c7801acec5f30fa16d"} Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.390149 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.390211 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.390274 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.390403 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.391228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.391276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.391297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.392565 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.392593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.392605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.393480 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.393520 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.393540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.519765 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.521171 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.521233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.521246 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:42 crc kubenswrapper[4823]: I0121 17:16:42.521283 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.212014 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.287088 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-02 20:41:51.725980447 +0000 UTC Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.357925 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.365172 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.398893 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3c93e22fa4c77374b9fb24b3cda1508e6cc468fe9d60b1aecd470661694b38c0"} Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.398975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1b55524de463687a4d45e4a1ea82bc38a39a42612e2a5ef412579847514631c5"} Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.398989 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"13e0b09e6ac1168470c90b57f8a6c1c8835ffd65e891f1a99fd5f6895c58e752"} Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.398991 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.399143 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.399000 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8cea9c878fea5a28ca3f222fbbb7ba4f2fe7dbcfe6dac5e93474332c24f24ea3"} Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.399480 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.400539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.400564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.400571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.400777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.400798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:43 crc kubenswrapper[4823]: I0121 17:16:43.400806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.279895 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.287336 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-22 16:11:42.912370101 +0000 UTC Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.409075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4ef41a23081e086dc34e573818cefec06baa11bfa37139134c7906505e7bad2"} Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.409262 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.409371 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.409262 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.410680 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.410724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.410740 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.411821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.411926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.411821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.411989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.412003 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:44 crc kubenswrapper[4823]: I0121 17:16:44.411953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:45 crc kubenswrapper[4823]: I0121 17:16:45.287779 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:44:29.380818925 +0000 UTC Jan 21 17:16:45 crc kubenswrapper[4823]: I0121 17:16:45.411580 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:45 crc kubenswrapper[4823]: I0121 17:16:45.411641 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:45 crc kubenswrapper[4823]: I0121 17:16:45.412643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:45 crc kubenswrapper[4823]: I0121 17:16:45.412701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:45 crc kubenswrapper[4823]: I0121 17:16:45.412719 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:45 crc kubenswrapper[4823]: 
I0121 17:16:45.412764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:45 crc kubenswrapper[4823]: I0121 17:16:45.412782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:45 crc kubenswrapper[4823]: I0121 17:16:45.412790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.288311 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:03:36.310384886 +0000 UTC Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.363044 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.363513 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.364917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.364970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.364985 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.942192 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.943259 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.944997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.945051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:46 crc kubenswrapper[4823]: I0121 17:16:46.945079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:47 crc kubenswrapper[4823]: I0121 17:16:47.185731 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:16:47 crc kubenswrapper[4823]: I0121 17:16:47.186952 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:47 crc kubenswrapper[4823]: I0121 17:16:47.188642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:47 crc kubenswrapper[4823]: I0121 17:16:47.188693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:47 crc kubenswrapper[4823]: I0121 17:16:47.188705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:47 crc kubenswrapper[4823]: I0121 17:16:47.280268 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 17:16:47 crc kubenswrapper[4823]: I0121 17:16:47.280383 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:16:47 crc kubenswrapper[4823]: I0121 17:16:47.289229 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:05:24.465346514 +0000 UTC Jan 21 17:16:48 crc kubenswrapper[4823]: I0121 17:16:48.290379 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:09:11.578956536 +0000 UTC Jan 21 17:16:49 crc kubenswrapper[4823]: I0121 17:16:49.142337 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 17:16:49 crc kubenswrapper[4823]: I0121 17:16:49.142634 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:16:49 crc kubenswrapper[4823]: I0121 17:16:49.144291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:16:49 crc kubenswrapper[4823]: I0121 17:16:49.144414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:16:49 crc kubenswrapper[4823]: I0121 17:16:49.144440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:16:49 crc kubenswrapper[4823]: I0121 17:16:49.290806 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:44:01.809750789 +0000 UTC Jan 21 17:16:49 crc kubenswrapper[4823]: E0121 17:16:49.414266 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 17:16:50 crc kubenswrapper[4823]: I0121 17:16:50.291827 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:51:49.765007967 +0000 UTC Jan 21 17:16:51 crc kubenswrapper[4823]: I0121 17:16:51.284795 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 17:16:51 crc kubenswrapper[4823]: I0121 17:16:51.293042 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:24:40.599354307 +0000 UTC Jan 21 17:16:51 crc kubenswrapper[4823]: E0121 17:16:51.405486 4823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" 
Jan 21 17:16:51 crc kubenswrapper[4823]: I0121 17:16:51.638368 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 17:16:51 crc kubenswrapper[4823]: I0121 17:16:51.638550 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 17:16:51 crc kubenswrapper[4823]: I0121 17:16:51.639843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:16:51 crc kubenswrapper[4823]: I0121 17:16:51.639901 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:16:51 crc kubenswrapper[4823]: I0121 17:16:51.639915 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.179725 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.179797 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.187256 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.187330 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.293819 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 14:31:48.929309625 +0000 UTC
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.320300 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.320504 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.321830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.321921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.321936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.349798 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.429127 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.433023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.433082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.433094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:16:52 crc kubenswrapper[4823]: I0121 17:16:52.444798 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 21 17:16:53 crc kubenswrapper[4823]: I0121 17:16:53.294955 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:30:19.504238181 +0000 UTC
Jan 21 17:16:53 crc kubenswrapper[4823]: I0121 17:16:53.430472 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 17:16:53 crc kubenswrapper[4823]: I0121 17:16:53.431236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:16:53 crc kubenswrapper[4823]: I0121 17:16:53.431267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:16:53 crc kubenswrapper[4823]: I0121 17:16:53.431277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:16:54 crc kubenswrapper[4823]: I0121 17:16:54.295600 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:02:06.463809268 +0000 UTC
Jan 21 17:16:55 crc kubenswrapper[4823]: I0121 17:16:55.296193 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:31:32.925785172 +0000 UTC
Jan 21 17:16:55 crc kubenswrapper[4823]: I0121 17:16:55.682568 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 17:16:55 crc kubenswrapper[4823]: I0121 17:16:55.693538 4823 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 17:16:55 crc kubenswrapper[4823]: I0121 17:16:55.710542 4823 csr.go:261] certificate signing request csr-68v69 is approved, waiting to be issued
Jan 21 17:16:55 crc kubenswrapper[4823]: I0121 17:16:55.718123 4823 csr.go:257] certificate signing request csr-68v69 is issued
Jan 21 17:16:56 crc kubenswrapper[4823]: I0121 17:16:56.296968 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:51:22.180333667 +0000 UTC
Jan 21 17:16:56 crc kubenswrapper[4823]: I0121 17:16:56.719449 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 17:11:55 +0000 UTC, rotation deadline is 2026-10-21 09:09:37.969145035 +0000 UTC
Jan 21 17:16:56 crc kubenswrapper[4823]: I0121 17:16:56.719724 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6543h52m41.249424479s for next certificate rotation
Jan 21 17:16:56 crc kubenswrapper[4823]: I0121 17:16:56.948142 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 17:16:56 crc kubenswrapper[4823]: I0121 17:16:56.948606 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 17:16:56 crc kubenswrapper[4823]: I0121 17:16:56.950063 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:16:56 crc kubenswrapper[4823]: I0121 17:16:56.950227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:16:56 crc kubenswrapper[4823]: I0121 17:16:56.950255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:16:56 crc kubenswrapper[4823]: I0121 17:16:56.953980 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.180116 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.182745 4823 trace.go:236] Trace[1576848627]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 17:16:43.680) (total time: 13502ms):
Jan 21 17:16:57 crc kubenswrapper[4823]: Trace[1576848627]: ---"Objects listed" error: 13502ms (17:16:57.182)
Jan 21 17:16:57 crc kubenswrapper[4823]: Trace[1576848627]: [13.502307729s] [13.502307729s] END
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.182796 4823 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.182785 4823 trace.go:236] Trace[1229436895]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 17:16:43.346) (total time: 13835ms):
Jan 21 17:16:57 crc kubenswrapper[4823]: Trace[1229436895]: ---"Objects listed" error: 13835ms (17:16:57.182)
Jan 21 17:16:57 crc kubenswrapper[4823]: Trace[1229436895]: [13.835944751s] [13.835944751s] END
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.182963 4823 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.183598 4823 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.183636 4823 trace.go:236] Trace[1024132034]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 17:16:42.292) (total time: 14891ms):
Jan 21 17:16:57 crc kubenswrapper[4823]: Trace[1024132034]: ---"Objects listed" error: 14891ms (17:16:57.183)
Jan 21 17:16:57 crc kubenswrapper[4823]: Trace[1024132034]: [14.8910624s] [14.8910624s] END
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.183650 4823 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.184219 4823 trace.go:236] Trace[1689087646]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 17:16:42.801) (total time: 14382ms):
Jan 21 17:16:57 crc kubenswrapper[4823]: Trace[1689087646]: ---"Objects listed" error: 14382ms (17:16:57.184)
Jan 21 17:16:57 crc kubenswrapper[4823]: Trace[1689087646]: [14.382362235s] [14.382362235s] END
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.184355 4823 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.184450 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.241210 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.245480 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.285955 4823 apiserver.go:52] "Watching apiserver"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.288529 4823 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.288757 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.289070 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.289158 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.289242 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.289422 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.289473 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.289734 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.289789 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.289843 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.289882 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.291675 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.291890 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.291928 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.292804 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.292889 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.292935 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.292813 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.293420 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.295033 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.297161 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:55:41.331599136 +0000 UTC
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.317931 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.327047 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.338952 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.353875 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.365297 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.374465 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.384802 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.386969 4823 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.399003 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.403944 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-q5k6p"] Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.404513 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-q5k6p" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.407192 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.407211 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.407234 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.410140 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.425163 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.449535 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.466392 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484716 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484769 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484788 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484806 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484823 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484838 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484871 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484886 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484902 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484918 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484945 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484962 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.484978 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485016 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485039 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485057 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485073 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485105 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485104 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485121 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485191 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485212 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485231 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485253 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485270 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485286 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 17:16:57 crc 
kubenswrapper[4823]: I0121 17:16:57.485305 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485322 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485340 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485360 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485377 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485398 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485416 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485434 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485462 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485485 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485505 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485522 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485542 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485560 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485577 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485596 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485612 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485630 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485646 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485662 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485705 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485741 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485755 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485791 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485806 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485821 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485837 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485869 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485888 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485925 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485946 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485966 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.485978 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486008 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486031 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486060 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486065 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486083 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486101 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486118 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486137 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486152 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486168 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486186 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486202 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486217 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486235 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486286 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486301 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486331 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486347 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486366 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486382 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" 
(UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486402 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486418 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486437 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486453 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486469 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486489 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486507 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486526 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486541 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486557 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486582 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486599 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486617 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486632 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486649 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486665 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486681 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486698 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486714 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 17:16:57 crc kubenswrapper[4823]: 
I0121 17:16:57.486730 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486746 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486761 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486776 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486826 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486841 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486873 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486890 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486926 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486941 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486957 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486974 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486991 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487007 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487026 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487048 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487064 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487082 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487099 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: 
I0121 17:16:57.487120 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487137 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487155 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487171 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487187 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487204 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487220 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487266 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487284 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487301 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:16:57 crc 
kubenswrapper[4823]: I0121 17:16:57.487317 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487335 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487353 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487370 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487388 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487406 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487424 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487441 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487459 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487475 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 17:16:57 crc 
kubenswrapper[4823]: I0121 17:16:57.487491 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487520 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487561 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487588 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487608 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487626 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487643 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487664 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487680 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487698 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487714 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487732 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487749 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487781 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487799 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487817 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487835 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487868 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487885 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487902 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487920 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487939 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487957 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487979 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487995 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488011 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488028 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488045 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488063 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488079 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488171 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488194 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488211 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488228 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488245 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488261 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488278 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488294 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488314 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488330 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488346 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488365 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488384 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488401 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488419 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488436 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488452 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488469 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 17:16:57 crc kubenswrapper[4823]: 
I0121 17:16:57.488488 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488505 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488522 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488539 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488555 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488572 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488590 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488609 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488655 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488690 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488714 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488735 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488779 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488803 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488826 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488867 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488891 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488916 
4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488945 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488974 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488999 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489046 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489060 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489072 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489082 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.491277 4823 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.492670 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.493147 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486109 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486147 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486311 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486354 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486472 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486517 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486616 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486821 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486843 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486953 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.486959 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487048 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487065 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487269 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487333 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487361 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487379 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487396 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487456 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487516 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487560 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487576 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487641 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487675 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487750 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487799 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487829 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487833 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487908 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487921 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.487936 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488109 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488128 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488138 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488142 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488318 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488354 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488371 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488448 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488550 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488548 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488686 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488735 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488753 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488765 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488909 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.488988 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489056 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489199 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489304 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489314 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489353 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489482 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489493 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489836 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.489930 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490101 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490105 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490223 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490226 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490324 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490440 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490439 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490523 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490598 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490687 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490702 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490892 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490934 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.490959 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.491021 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.491113 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:16:57.991082438 +0000 UTC m=+18.917213298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.504406 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.504746 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.505327 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.505388 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.505520 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.505875 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.505764 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.506501 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.506717 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.507122 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.507526 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:16:58.007508994 +0000 UTC m=+18.933639854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.507637 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.507671 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:16:58.007663298 +0000 UTC m=+18.933794158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.507920 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.508132 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.508725 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.509072 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.509680 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.509931 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.510006 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.510195 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.510282 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.510529 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.510652 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.515406 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.515752 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.517618 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.518292 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.518490 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.518757 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.519127 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.510983 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.520304 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.521619 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.522010 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.529407 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.529688 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.530095 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.530325 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.530533 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.530746 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.531039 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.531293 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.532628 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.533112 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.533295 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.533810 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.533981 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.541260 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.541478 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.541497 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.546237 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.546529 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.547074 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.547272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.548254 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.548406 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.548896 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.549710 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.549730 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.550881 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.550742 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.550895 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.550953 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.550991 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 17:16:58.05097216 +0000 UTC m=+18.977103020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.491235 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.491272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.491334 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.491907 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.491992 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.503462 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.551125 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.551140 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.551152 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:57 crc kubenswrapper[4823]: E0121 17:16:57.551198 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 17:16:58.051184056 +0000 UTC m=+18.977314916 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.551228 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.551431 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.551597 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.551607 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.551772 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.551820 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.551843 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.551918 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47022->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.551993 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47022->192.168.126.11:17697: read: connection reset by peer" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.552021 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.552103 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.552306 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.552435 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.552658 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.552736 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.552898 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.553036 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.553081 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.553111 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.553101 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.553328 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.553332 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.553525 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.553821 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.491172 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.548267 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.554031 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.554145 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.554210 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.554379 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.554527 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.555130 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.555829 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.556165 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.556266 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.557500 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.561238 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.561619 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.561699 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.561758 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.562151 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.567084 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.567294 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.567418 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.568230 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.568260 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.568292 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.568564 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.568568 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.568768 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.569135 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.571837 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-ku
be-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.577885 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.580149 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.580600 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.586390 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.586550 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.586561 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.586727 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.586795 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.586813 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.586878 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.588345 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.588782 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.589270 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.589526 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.589571 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftvm\" (UniqueName: \"kubernetes.io/projected/a9a35441-43d2-44fd-b8c7-5fe354ebae4d-kube-api-access-lftvm\") pod \"node-resolver-q5k6p\" (UID: \"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\") " pod="openshift-dns/node-resolver-q5k6p" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.589600 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.589654 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a9a35441-43d2-44fd-b8c7-5fe354ebae4d-hosts-file\") pod \"node-resolver-q5k6p\" (UID: \"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\") " pod="openshift-dns/node-resolver-q5k6p" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590022 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590191 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590286 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590309 4823 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590322 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590333 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590342 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590351 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590361 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590372 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590381 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590390 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590399 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590408 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590418 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590428 4823 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590438 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590446 4823 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590455 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590464 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590474 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590483 4823 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590491 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590501 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590511 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590521 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590533 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590544 4823 
reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590554 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590564 4823 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590574 4823 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590583 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590592 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590601 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590610 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590618 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590627 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590635 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590644 4823 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590665 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590675 4823 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590690 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590699 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590709 4823 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590719 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590728 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590736 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590745 4823 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590754 4823 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590763 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590771 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590781 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590789 4823 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590805 4823 reconciler_common.go:293] "Volume detached for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590817 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590826 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.590834 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.838980 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839072 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839135 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839185 4823 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839197 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839286 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839315 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839325 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839334 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839347 4823 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839356 4823 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839364 4823 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839405 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839420 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839429 4823 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839482 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839564 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839580 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839609 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839623 4823 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.839646 4823 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.840786 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842090 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842148 4823 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842161 4823 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842289 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842310 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842319 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842364 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842415 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842472 4823 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842483 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842494 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842569 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842580 4823 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842591 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842603 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842687 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842728 4823 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842755 4823 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842784 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842798 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842829 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842838 4823 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842893 4823 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842922 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842978 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.842996 4823 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843028 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843042 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843093 4823 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843109 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843130 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843142 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843159 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843170 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843252 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843265 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843278 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843296 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843308 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843321 4823 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843369 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843388 4823 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843401 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843556 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843572 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843590 4823 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843627 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843640 4823 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843658 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843674 4823 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843686 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843700 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843750 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843765 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843779 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843793 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843874 4823 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843890 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843902 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843914 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843935 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843947 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.843982 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.844001 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.844012 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.844414 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855112 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855205 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855353 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855596 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855628 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855644 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855656 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855668 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855680 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855692 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855704 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855716 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855728 4823 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855751 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855829 4823 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855884 4823 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855902 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855919 4823 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855931 4823 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855944 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855956 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855968 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855980 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.855993 4823 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856005 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856017 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856030 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856043 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856055 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856069 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856082 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856186 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856206 4823 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856221 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856234 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856246 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856289 4823 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856312 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856325 4823 reconciler_common.go:293] "Volume detached for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856337 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856349 4823 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856360 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856371 4823 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856383 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856393 4823 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856405 4823 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856416 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.856428 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.859103 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.860891 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.867419 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: W0121 17:16:57.874616 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-d1d2feb5c3c5e7f55ed47db2be7722b0a60ec08071a28aac6efa9d722c8b5536 WatchSource:0}: Error finding container d1d2feb5c3c5e7f55ed47db2be7722b0a60ec08071a28aac6efa9d722c8b5536: Status 404 returned error can't find the container with id d1d2feb5c3c5e7f55ed47db2be7722b0a60ec08071a28aac6efa9d722c8b5536 Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.879169 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.956901 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lftvm\" (UniqueName: \"kubernetes.io/projected/a9a35441-43d2-44fd-b8c7-5fe354ebae4d-kube-api-access-lftvm\") pod \"node-resolver-q5k6p\" (UID: \"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\") " pod="openshift-dns/node-resolver-q5k6p" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.956972 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a9a35441-43d2-44fd-b8c7-5fe354ebae4d-hosts-file\") pod \"node-resolver-q5k6p\" (UID: \"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\") " pod="openshift-dns/node-resolver-q5k6p" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.957002 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.957016 4823 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.957027 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.957041 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.957055 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:57 crc kubenswrapper[4823]: I0121 17:16:57.957105 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a9a35441-43d2-44fd-b8c7-5fe354ebae4d-hosts-file\") pod \"node-resolver-q5k6p\" (UID: \"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\") " pod="openshift-dns/node-resolver-q5k6p" Jan 21 17:16:57 crc kubenswrapper[4823]: 
I0121 17:16:57.973983 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftvm\" (UniqueName: \"kubernetes.io/projected/a9a35441-43d2-44fd-b8c7-5fe354ebae4d-kube-api-access-lftvm\") pod \"node-resolver-q5k6p\" (UID: \"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\") " pod="openshift-dns/node-resolver-q5k6p" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.015677 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-q5k6p" Jan 21 17:16:58 crc kubenswrapper[4823]: W0121 17:16:58.033002 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9a35441_43d2_44fd_b8c7_5fe354ebae4d.slice/crio-0f0217edf040f8937333cfbf73777a112b7d84296b6040c689a89898583473ce WatchSource:0}: Error finding container 0f0217edf040f8937333cfbf73777a112b7d84296b6040c689a89898583473ce: Status 404 returned error can't find the container with id 0f0217edf040f8937333cfbf73777a112b7d84296b6040c689a89898583473ce Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.057432 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.057530 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.057563 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.057607 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.057635 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.057795 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.057838 4823 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.057878 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.057925 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 17:16:59.057910136 +0000 UTC m=+19.984040996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.058480 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:16:59.05846909 +0000 UTC m=+19.984599950 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.058693 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.058734 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:16:59.058725886 +0000 UTC m=+19.984856746 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.058751 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.058659 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.058789 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.058798 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.058842 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:16:59.058812618 +0000 UTC m=+19.984943548 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.058882 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 17:16:59.05887284 +0000 UTC m=+19.985003810 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.298265 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 17:01:35.627708722 +0000 UTC Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.443239 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q5k6p" event={"ID":"a9a35441-43d2-44fd-b8c7-5fe354ebae4d","Type":"ContainerStarted","Data":"7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8"} Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.443306 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q5k6p" event={"ID":"a9a35441-43d2-44fd-b8c7-5fe354ebae4d","Type":"ContainerStarted","Data":"0f0217edf040f8937333cfbf73777a112b7d84296b6040c689a89898583473ce"} Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.444192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77"} Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.444263 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"793a8b27019a2379e51a030df5b981cc96b5bbc15a06e57755a9500877d49916"} Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.444945 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d1d2feb5c3c5e7f55ed47db2be7722b0a60ec08071a28aac6efa9d722c8b5536"} Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.446349 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.447962 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76" exitCode=255 Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.448046 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76"} Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.448535 4823 scope.go:117] "RemoveContainer" containerID="5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.449700 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056"} Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.449739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975"} Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.449752 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2295cd4cd1aa37f18758079144cc904a3966fa4aebb5717c68cc47146565793a"} Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.459725 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.473514 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.484940 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.506172 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.530556 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.546467 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 
17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.564078 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.578955 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.596392 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.632581 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.660495 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.685007 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.727947 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.752805 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.797332 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.822292 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.837037 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.848528 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.863117 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-daemon-4m4vw"] Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.863450 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.864538 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-tsbbs"] Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.865394 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.865427 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.865694 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.865752 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.865831 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.866439 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-skvzm"] Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.866633 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.866642 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7q2df"] Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.867105 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.867503 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.869602 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.869757 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 17:16:58 crc kubenswrapper[4823]: W0121 17:16:58.870351 4823 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.870441 4823 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.870688 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.870963 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 17:16:58 crc kubenswrapper[4823]: W0121 17:16:58.871003 4823 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.871023 4823 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.871083 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 17:16:58 crc kubenswrapper[4823]: W0121 17:16:58.871101 4823 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 17:16:58 crc kubenswrapper[4823]: E0121 17:16:58.871127 4823 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between 
node 'crc' and this object" logger="UnhandledError" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.872242 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.872257 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.872405 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.872508 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.873104 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.873264 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.880588 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.889152 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.891281 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.905320 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.915673 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.924578 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.937334 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.949288 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965454 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-run-k8s-cni-cncf-io\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965492 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-conf-dir\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965524 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-systemd\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965544 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-etc-kubernetes\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965559 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-kubelet\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965576 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aedcad4-c5da-40a2-a783-ce9096a63c6e-proxy-tls\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965677 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-systemd-units\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965718 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-log-socket\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965741 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-cni-dir\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965757 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-cni-binary-copy\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965773 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-var-lib-cni-bin\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965796 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-slash\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965811 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-ovn\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965829 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-bin\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965877 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7aedcad4-c5da-40a2-a783-ce9096a63c6e-mcd-auth-proxy-config\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965901 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-os-release\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965917 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-hostroot\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.965950 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-var-lib-kubelet\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966003 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-daemon-config\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966058 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-run-multus-certs\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966077 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7jv5d\" (UniqueName: \"kubernetes.io/projected/48951ca6-6148-41a8-bdc2-d753cf3ecea9-kube-api-access-7jv5d\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966117 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966157 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovn-node-metrics-cert\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966186 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2fdd\" (UniqueName: \"kubernetes.io/projected/b5f1d66f-b00f-4e75-8130-43977e13eec8-kube-api-access-t2fdd\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966206 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-var-lib-cni-multus\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966232 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-cnibin\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966252 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-netns\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966267 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-run-netns\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966284 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrpp\" (UniqueName: \"kubernetes.io/projected/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-kube-api-access-hlrpp\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " 
pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966301 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48951ca6-6148-41a8-bdc2-d753cf3ecea9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966316 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-node-log\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966332 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-cnibin\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966356 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-etc-openvswitch\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966372 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-openvswitch\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966388 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-netd\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-socket-dir-parent\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966423 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-system-cni-dir\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-os-release\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: 
\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966464 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-config\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966510 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-script-lib\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966525 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m48ww\" (UniqueName: \"kubernetes.io/projected/7aedcad4-c5da-40a2-a783-ce9096a63c6e-kube-api-access-m48ww\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966539 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-system-cni-dir\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966554 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48951ca6-6148-41a8-bdc2-d753cf3ecea9-cni-binary-copy\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966568 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966591 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-var-lib-openvswitch\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966630 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-ovn-kubernetes\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966646 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-env-overrides\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.966693 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7aedcad4-c5da-40a2-a783-ce9096a63c6e-rootfs\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.967402 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9
d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.980433 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:58 crc kubenswrapper[4823]: I0121 17:16:58.993014 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.015417 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.026227 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.035603 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.054461 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068414 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 
21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.068543 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:17:01.068513928 +0000 UTC m=+21.994644788 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7aedcad4-c5da-40a2-a783-ce9096a63c6e-mcd-auth-proxy-config\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068654 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-os-release\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068683 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-hostroot\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068708 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-slash\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068731 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-ovn\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068751 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-bin\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068778 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:16:59 crc 
kubenswrapper[4823]: I0121 17:16:59.068800 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-var-lib-kubelet\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068820 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-daemon-config\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068846 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068900 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovn-node-metrics-cert\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068927 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2fdd\" (UniqueName: \"kubernetes.io/projected/b5f1d66f-b00f-4e75-8130-43977e13eec8-kube-api-access-t2fdd\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068950 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-var-lib-cni-multus\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068973 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-run-multus-certs\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.068998 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jv5d\" (UniqueName: \"kubernetes.io/projected/48951ca6-6148-41a8-bdc2-d753cf3ecea9-kube-api-access-7jv5d\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069024 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069053 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlrpp\" (UniqueName: \"kubernetes.io/projected/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-kube-api-access-hlrpp\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069081 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-cnibin\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069103 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-netns\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069125 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-run-netns\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069147 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-node-log\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069171 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-cnibin\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069199 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48951ca6-6148-41a8-bdc2-d753cf3ecea9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069232 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-netd\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069232 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc 
kubenswrapper[4823]: I0121 17:16:59.069240 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-os-release\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069254 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-socket-dir-parent\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.069322 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069356 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-etc-openvswitch\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.069367 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.069382 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069396 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-bin\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069432 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-hostroot\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.069439 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:01.06942068 +0000 UTC m=+21.995551540 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069476 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-slash\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069477 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-socket-dir-parent\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.069561 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069592 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-var-lib-cni-multus\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-etc-openvswitch\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.069666 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:01.069647506 +0000 UTC m=+21.995778436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069692 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-openvswitch\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069704 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-var-lib-kubelet\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069717 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-system-cni-dir\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069736 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-cnibin\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069742 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-netns\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069749 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069506 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-ovn\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069818 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-node-log\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069778 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-script-lib\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069873 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-os-release\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069643 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-run-multus-certs\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069894 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-config\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.069805 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069815 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-run-netns\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069828 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-openvswitch\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069880 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-system-cni-dir\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069914 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-ovn-kubernetes\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069980 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-env-overrides\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 
17:16:59.070006 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7aedcad4-c5da-40a2-a783-ce9096a63c6e-rootfs\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070032 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m48ww\" (UniqueName: \"kubernetes.io/projected/7aedcad4-c5da-40a2-a783-ce9096a63c6e-kube-api-access-m48ww\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.070043 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:01.070029265 +0000 UTC m=+21.996160215 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070036 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-os-release\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069984 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-netd\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.069801 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-cnibin\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070081 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7aedcad4-c5da-40a2-a783-ce9096a63c6e-rootfs\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070104 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-ovn-kubernetes\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070113 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-system-cni-dir\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070141 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-system-cni-dir\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070151 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48951ca6-6148-41a8-bdc2-d753cf3ecea9-cni-binary-copy\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070277 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070304 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-var-lib-openvswitch\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070347 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-var-lib-openvswitch\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070353 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-run-k8s-cni-cncf-io\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070384 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-conf-dir\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070442 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-systemd\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070467 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-etc-kubernetes\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070497 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070500 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-daemon-config\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070541 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-script-lib\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070562 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-kubelet\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070572 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-systemd\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070586 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aedcad4-c5da-40a2-a783-ce9096a63c6e-proxy-tls\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.070590 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070599 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-run-k8s-cni-cncf-io\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070607 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-etc-kubernetes\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 
17:16:59.070618 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070662 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-cni-binary-copy\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070658 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-env-overrides\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.070670 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070591 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-conf-dir\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070721 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-var-lib-cni-bin\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070746 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-systemd-units\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070768 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-log-socket\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070790 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48951ca6-6148-41a8-bdc2-d753cf3ecea9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070803 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-cni-dir\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " 
pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070866 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-log-socket\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.070907 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-systemd-units\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.070947 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:01.070939007 +0000 UTC m=+21.997069867 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.071021 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48951ca6-6148-41a8-bdc2-d753cf3ecea9-cni-binary-copy\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.071065 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-multus-cni-dir\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.071099 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-kubelet\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.071124 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-host-var-lib-cni-bin\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.071174 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48951ca6-6148-41a8-bdc2-d753cf3ecea9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.071443 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-cni-binary-copy\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.071677 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7aedcad4-c5da-40a2-a783-ce9096a63c6e-mcd-auth-proxy-config\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.071843 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.082927 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aedcad4-c5da-40a2-a783-ce9096a63c6e-proxy-tls\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.091597 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2fdd\" (UniqueName: \"kubernetes.io/projected/b5f1d66f-b00f-4e75-8130-43977e13eec8-kube-api-access-t2fdd\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.094133 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jv5d\" (UniqueName: \"kubernetes.io/projected/48951ca6-6148-41a8-bdc2-d753cf3ecea9-kube-api-access-7jv5d\") pod \"multus-additional-cni-plugins-tsbbs\" (UID: \"48951ca6-6148-41a8-bdc2-d753cf3ecea9\") " pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.095012 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.095210 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m48ww\" (UniqueName: \"kubernetes.io/projected/7aedcad4-c5da-40a2-a783-ce9096a63c6e-kube-api-access-m48ww\") pod \"machine-config-daemon-4m4vw\" (UID: \"7aedcad4-c5da-40a2-a783-ce9096a63c6e\") " pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.095614 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlrpp\" (UniqueName: \"kubernetes.io/projected/ea8699bd-e53a-443e-b2e5-0fe577f2c19f-kube-api-access-hlrpp\") pod \"multus-skvzm\" (UID: \"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\") " pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.111714 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.126385 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.138718 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.154270 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":fal
se,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\"
:\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.167569 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.177414 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.181733 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.183838 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-skvzm" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.198514 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.198886 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.203363 4823 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.204033 4823 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.204163 4823 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.204279 4823 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.204398 4823 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.204698 4823 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.205014 4823 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.205125 4823 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.206171 4823 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.207366 4823 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.207417 4823 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": 
Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.207477 4823 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.207534 4823 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.207592 4823 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.207419 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/events\": read tcp 38.102.83.38:43266->38.102.83.38:6443: use of closed network connection" event="&Event{ObjectMeta:{multus-skvzm.188cce83d1618db7 openshift-multus 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:multus-skvzm,UID:ea8699bd-e53a-443e-b2e5-0fe577f2c19f,APIVersion:v1,ResourceVersion:26702,FieldPath:spec.containers{kube-multus},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 17:16:59.198631351 +0000 UTC m=+20.124762211,LastTimestamp:2026-01-21 17:16:59.198631351 +0000 UTC m=+20.124762211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.207648 4823 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.207686 4823 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: W0121 17:16:59.207802 4823 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.299223 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:59:38.452850841 +0000 UTC Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.343606 4823 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.343749 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.343880 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.343882 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.343972 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:16:59 crc kubenswrapper[4823]: E0121 17:16:59.344083 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.350806 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.351969 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.353699 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.354837 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.356128 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.356758 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.357483 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.358747 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.359497 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.360645 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.361297 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.362737 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.363642 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.364742 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.365189 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-synce
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.365400 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.366081 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.367451 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.367939 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 
17:16:59.368588 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.369800 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.370498 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.371561 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.376455 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.377632 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.378814 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.379581 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.381017 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.381522 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.381907 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.382429 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.383616 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.384292 4823 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.384398 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.387653 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.388283 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.388872 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.390769 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.391484 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.392149 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.393397 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.394675 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.395235 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.395813 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.396866 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.398454 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.401741 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.406009 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.406590 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.407729 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.408774 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.419973 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.421645 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.422466 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.423151 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.424266 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.424937 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.426918 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.443326 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.459362 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15"} Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.459436 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04"} Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.459447 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"a9e06447d2cd2b6e2b2f85d76ab7d7cb5acafab86f5fe1262f95ac4bdb16585f"} Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.459888 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.463133 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" event={"ID":"48951ca6-6148-41a8-bdc2-d753cf3ecea9","Type":"ContainerStarted","Data":"40ef1d550e53df05a3a1b30f38b7074e135059036f309a075edfc92af03d5280"} Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.464311 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-skvzm" event={"ID":"ea8699bd-e53a-443e-b2e5-0fe577f2c19f","Type":"ContainerStarted","Data":"788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5"} Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.464374 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-skvzm" event={"ID":"ea8699bd-e53a-443e-b2e5-0fe577f2c19f","Type":"ContainerStarted","Data":"d7b2aa9d3282d843a59e7900db279deb583e1914d5efe4f501ad0cdb7c07d16a"} Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.466817 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.468416 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2"} Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.468915 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.475614 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.489378 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.504259 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.518426 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.529695 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.541111 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.559440 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2
fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.574040 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.586887 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.602417 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.616754 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.635159 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.687668 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.705533 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.732641 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.749657 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.771685 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.789724 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.869353 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name
\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.883415 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:16:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:16:59 crc kubenswrapper[4823]: I0121 17:16:59.996020 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.005416 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovn-node-metrics-cert\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:00 crc kubenswrapper[4823]: E0121 17:17:00.070713 4823 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Jan 21 17:17:00 crc kubenswrapper[4823]: E0121 17:17:00.070827 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-config podName:b5f1d66f-b00f-4e75-8130-43977e13eec8 nodeName:}" failed. 
No retries permitted until 2026-01-21 17:17:00.570804841 +0000 UTC m=+21.496935701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-config") pod "ovnkube-node-7q2df" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8") : failed to sync configmap cache: timed out waiting for the condition Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.077159 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.156958 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.159091 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.275233 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.276824 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.299315 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.299394 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:27:21.201505847 +0000 UTC Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.356115 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.384518 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.386936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.386992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.387004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.387105 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.394649 4823 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.395055 4823 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.396004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.396031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.396039 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc 
kubenswrapper[4823]: I0121 17:17:00.396052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.396062 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.410790 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 17:17:00 crc kubenswrapper[4823]: E0121 17:17:00.416707 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.420433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.420482 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.420493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.420510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.420522 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.432130 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 17:17:00 crc kubenswrapper[4823]: E0121 17:17:00.433304 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.436619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.436654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.436666 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.436680 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.436689 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.443223 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 17:17:00 crc kubenswrapper[4823]: E0121 17:17:00.447708 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.451771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.451816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.451828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.451847 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.451876 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: E0121 17:17:00.464893 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.468408 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.468463 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.468496 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.468514 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.468532 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.471496 4823 generic.go:334] "Generic (PLEG): container finished" podID="48951ca6-6148-41a8-bdc2-d753cf3ecea9" containerID="f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a" exitCode=0 Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.472038 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" event={"ID":"48951ca6-6148-41a8-bdc2-d753cf3ecea9","Type":"ContainerDied","Data":"f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a"} Jan 21 17:17:00 crc kubenswrapper[4823]: E0121 17:17:00.487398 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: E0121 17:17:00.487598 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.489265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.489320 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.489335 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.489362 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.489379 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.490717 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.503603 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.505779 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc 
kubenswrapper[4823]: I0121 17:17:00.518375 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.529548 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.531282 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.541282 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.543023 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.553078 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.560287 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.573111 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.582842 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.585279 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-config\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.585964 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-config\") pod \"ovnkube-node-7q2df\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.591947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.591977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.591986 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.591999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.592009 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.599147 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.608717 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\
":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.611872 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.612045 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.621085 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.622117 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.633397 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.647695 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.661178 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.674602 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.685018 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.693731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.693780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.693790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.693804 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.693814 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.694678 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.696074 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92eda
f5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.710722 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.733240 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.778205 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.796233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.796267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.796275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.796290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.796300 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.815740 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.852986 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.894026 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.898991 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.899028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.899038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.899054 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.899066 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:00Z","lastTransitionTime":"2026-01-21T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.933495 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:00 crc kubenswrapper[4823]: I0121 17:17:00.974325 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:00Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.001397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.001423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.001431 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.001444 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.001453 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.017267 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.059700 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.090162 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.090246 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.090276 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.090300 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.090328 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090429 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090498 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:17:05.090454992 +0000 UTC m=+26.016585892 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090574 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:05.090554415 +0000 UTC m=+26.016685385 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090576 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090593 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090624 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090629 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090651 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090656 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090754 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:05.090730529 +0000 UTC m=+26.016861509 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090814 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:05.090796771 +0000 UTC m=+26.016927731 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090835 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.090992 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:05.090968285 +0000 UTC m=+26.017099185 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.104453 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.104901 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.105053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.105168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.105309 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.207908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.207956 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.207967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.207984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.207995 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.299796 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:22:21.814803599 +0000 UTC Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.310809 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.310907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.310949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.310975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.310992 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.343288 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.343733 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.343424 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.344041 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.343362 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:01 crc kubenswrapper[4823]: E0121 17:17:01.344261 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.414095 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.414466 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.414545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.414618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.414674 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.477296 4823 generic.go:334] "Generic (PLEG): container finished" podID="48951ca6-6148-41a8-bdc2-d753cf3ecea9" containerID="3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee" exitCode=0 Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.477385 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" event={"ID":"48951ca6-6148-41a8-bdc2-d753cf3ecea9","Type":"ContainerDied","Data":"3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.479570 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.480997 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db" exitCode=0 Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.481042 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.481069 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"41e5b277bfbf6f484e84e0d754acadb7975d8cf7c4559fbf042f2550b7f9179d"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.513673 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.518245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.518275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.518283 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.518298 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.518308 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.528780 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.541291 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.554918 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.568376 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.585280 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.596784 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.613528 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.621749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.621784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.621793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.621808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.621819 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.626685 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.640146 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.666071 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.678692 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.692478 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.707581 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.721430 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.725424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.725470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.725481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.725498 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.725510 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.734186 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.747774 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.776144 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.816502 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.827844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.827919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.827930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.827950 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.827962 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.855055 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.892535 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.930573 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.930609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.930621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.930635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.930645 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:01Z","lastTransitionTime":"2026-01-21T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.940514 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:01 crc kubenswrapper[4823]: I0121 17:17:01.980877 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:01Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.018409 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.033614 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.033671 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.033693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.033751 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.033776 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.054332 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.096633 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.135777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.135836 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.135922 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.135951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.135981 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.238745 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.238776 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.238786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.238802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.238813 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.250617 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-m6pdc"] Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.251943 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.253563 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.254055 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.254192 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.254316 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.267110 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.282284 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.296815 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.302603 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e6b2e38e-0844-4f45-8f61-0e1ee997556a-serviceca\") pod \"node-ca-m6pdc\" (UID: \"e6b2e38e-0844-4f45-8f61-0e1ee997556a\") " pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.302738 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdmp6\" (UniqueName: \"kubernetes.io/projected/e6b2e38e-0844-4f45-8f61-0e1ee997556a-kube-api-access-mdmp6\") pod \"node-ca-m6pdc\" (UID: \"e6b2e38e-0844-4f45-8f61-0e1ee997556a\") " pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.302787 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e6b2e38e-0844-4f45-8f61-0e1ee997556a-host\") pod \"node-ca-m6pdc\" (UID: \"e6b2e38e-0844-4f45-8f61-0e1ee997556a\") " pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.302954 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:47:08.167360357 +0000 UTC Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.341707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.341748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.341759 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.341779 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.341793 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.344155 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.379942 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.404038 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e6b2e38e-0844-4f45-8f61-0e1ee997556a-host\") pod \"node-ca-m6pdc\" (UID: \"e6b2e38e-0844-4f45-8f61-0e1ee997556a\") " pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.404091 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e6b2e38e-0844-4f45-8f61-0e1ee997556a-serviceca\") pod \"node-ca-m6pdc\" (UID: \"e6b2e38e-0844-4f45-8f61-0e1ee997556a\") " pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.404122 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdmp6\" (UniqueName: \"kubernetes.io/projected/e6b2e38e-0844-4f45-8f61-0e1ee997556a-kube-api-access-mdmp6\") pod \"node-ca-m6pdc\" (UID: \"e6b2e38e-0844-4f45-8f61-0e1ee997556a\") " pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.404127 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e6b2e38e-0844-4f45-8f61-0e1ee997556a-host\") pod \"node-ca-m6pdc\" (UID: 
\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\") " pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.404984 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e6b2e38e-0844-4f45-8f61-0e1ee997556a-serviceca\") pod \"node-ca-m6pdc\" (UID: \"e6b2e38e-0844-4f45-8f61-0e1ee997556a\") " pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.413712 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.443917 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdmp6\" (UniqueName: \"kubernetes.io/projected/e6b2e38e-0844-4f45-8f61-0e1ee997556a-kube-api-access-mdmp6\") pod \"node-ca-m6pdc\" (UID: \"e6b2e38e-0844-4f45-8f61-0e1ee997556a\") " pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.451249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.451278 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.451291 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.451305 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.451318 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.474661 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.490654 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.490698 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.490708 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.490717 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.490727 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.490737 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.492960 4823 generic.go:334] "Generic (PLEG): container finished" podID="48951ca6-6148-41a8-bdc2-d753cf3ecea9" containerID="4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98" exitCode=0 Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.493066 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" event={"ID":"48951ca6-6148-41a8-bdc2-d753cf3ecea9","Type":"ContainerDied","Data":"4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.516709 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.551902 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.553548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.553580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.553592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.553609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.553620 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.594362 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.599452 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-m6pdc" Jan 21 17:17:02 crc kubenswrapper[4823]: W0121 17:17:02.612409 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6b2e38e_0844_4f45_8f61_0e1ee997556a.slice/crio-1cc11d3c793b31e1a4c25324f4dbe3d952856f1ffe9aa4ce272849f608fed5f7 WatchSource:0}: Error finding container 1cc11d3c793b31e1a4c25324f4dbe3d952856f1ffe9aa4ce272849f608fed5f7: Status 404 returned error can't find the container with id 1cc11d3c793b31e1a4c25324f4dbe3d952856f1ffe9aa4ce272849f608fed5f7 Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.634776 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.655732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.655769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.655778 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.655791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.655801 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.673922 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.714152 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.758451 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.758494 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.758502 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.758517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.758527 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.763294 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.794791 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.833514 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.861835 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.861906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.861916 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.861931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.861942 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.873527 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.917652 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.956969 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:02 
crc kubenswrapper[4823]: I0121 17:17:02.964970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.965040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.965069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.965101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.965123 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:02Z","lastTransitionTime":"2026-01-21T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:02 crc kubenswrapper[4823]: I0121 17:17:02.995015 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\
\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:02Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.039812 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.067990 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.068044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.068716 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.068764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.068785 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.083520 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.113263 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.163587 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.171749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.171888 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.171977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.172053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.172123 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.198825 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.239610 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.274730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.274791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.274815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.274840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.274899 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.284551 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.303544 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:17:25.700844343 +0000 UTC Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.319819 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.343452 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:03 crc kubenswrapper[4823]: E0121 17:17:03.343658 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.343806 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:03 crc kubenswrapper[4823]: E0121 17:17:03.343944 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.344203 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:03 crc kubenswrapper[4823]: E0121 17:17:03.344328 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.377893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.377932 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.377944 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.377960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.377972 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.480775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.480815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.480828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.480844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.480911 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.498635 4823 generic.go:334] "Generic (PLEG): container finished" podID="48951ca6-6148-41a8-bdc2-d753cf3ecea9" containerID="edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a" exitCode=0 Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.498720 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" event={"ID":"48951ca6-6148-41a8-bdc2-d753cf3ecea9","Type":"ContainerDied","Data":"edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.501603 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m6pdc" event={"ID":"e6b2e38e-0844-4f45-8f61-0e1ee997556a","Type":"ContainerStarted","Data":"4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.501660 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m6pdc" event={"ID":"e6b2e38e-0844-4f45-8f61-0e1ee997556a","Type":"ContainerStarted","Data":"1cc11d3c793b31e1a4c25324f4dbe3d952856f1ffe9aa4ce272849f608fed5f7"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.518300 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.533340 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.555128 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.565292 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.581122 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.583256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.583294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.583307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.583323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.583334 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.592809 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.604832 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.634065 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.674716 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.686278 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.686318 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.686329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.686344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.686356 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.713016 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb241
86669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.756893 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.789117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.789177 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.789190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.789206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.789250 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.793316 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.838908 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.874476 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.892657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.892720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.892742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.892921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.893012 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.920189 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z 
is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.953333 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.995412 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:03Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.996183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.996221 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.996231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.996250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:03 crc kubenswrapper[4823]: I0121 17:17:03.996261 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:03Z","lastTransitionTime":"2026-01-21T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.032507 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.075740 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.100163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.100225 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.100238 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.100255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.100273 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:04Z","lastTransitionTime":"2026-01-21T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.114113 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.157471 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.197521 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.202621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.202687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.202704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.202725 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.202738 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:04Z","lastTransitionTime":"2026-01-21T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.236600 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.274807 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.303722 4823 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:17:47.165343043 +0000 UTC Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.305286 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.305320 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.305329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.305345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.305355 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:04Z","lastTransitionTime":"2026-01-21T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.318767 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.358835 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.396231 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.407401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.407441 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.407458 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.407473 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.407483 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:04Z","lastTransitionTime":"2026-01-21T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.435208 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.506405 4823 generic.go:334] "Generic (PLEG): container finished" podID="48951ca6-6148-41a8-bdc2-d753cf3ecea9" containerID="d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b" exitCode=0 Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.506475 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" event={"ID":"48951ca6-6148-41a8-bdc2-d753cf3ecea9","Type":"ContainerDied","Data":"d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.510856 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.510945 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.510956 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.510972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.510981 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:04Z","lastTransitionTime":"2026-01-21T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.518838 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.533160 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.553956 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.596028 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc
51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.613637 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.613671 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.613683 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:04 crc kubenswrapper[4823]: 
I0121 17:17:04.613700 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.613711 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:04Z","lastTransitionTime":"2026-01-21T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.632136 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.675099 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.713948 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.716157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.716220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.716232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.716259 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.716273 4823 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:04Z","lastTransitionTime":"2026-01-21T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.752055 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.798441 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.818985 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.819014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.819025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.819039 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.819052 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:04Z","lastTransitionTime":"2026-01-21T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.839352 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.877179 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.914966 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.921899 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.921939 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.921954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.921977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.922002 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:04Z","lastTransitionTime":"2026-01-21T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.956684 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:04 crc kubenswrapper[4823]: I0121 17:17:04.995660 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.024153 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.024190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.024202 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.024216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.024228 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.127021 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.127079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.127098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.127121 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.127138 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.133075 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.133206 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.133318 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.133373 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133413 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:17:13.133378716 +0000 UTC m=+34.059509626 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.133465 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133511 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133525 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133584 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133616 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133621 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133670 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133622 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:13.133598401 +0000 UTC m=+34.059729301 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133718 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-21 17:17:13.133694624 +0000 UTC m=+34.059825524 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133544 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133748 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:13.133731875 +0000 UTC m=+34.059862765 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133751 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.133831 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:13.133809356 +0000 UTC m=+34.059940256 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.230069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.230141 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.230165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.230194 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.230215 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.304183 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:04:16.169798574 +0000 UTC Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.332239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.332286 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.332299 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.332316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.332329 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.344078 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.344146 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.344202 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.344280 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.344337 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:05 crc kubenswrapper[4823]: E0121 17:17:05.344385 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.435245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.435281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.435291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.435309 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.435319 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.516440 4823 generic.go:334] "Generic (PLEG): container finished" podID="48951ca6-6148-41a8-bdc2-d753cf3ecea9" containerID="ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1" exitCode=0 Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.516527 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" event={"ID":"48951ca6-6148-41a8-bdc2-d753cf3ecea9","Type":"ContainerDied","Data":"ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.521438 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.537136 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.540241 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.540283 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.540300 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.540321 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.540334 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.551457 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.572625 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.598237 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.613655 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.625634 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.637791 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.643085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.643115 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.643125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.643141 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.643152 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.654414 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.668142 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.679103 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.690388 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.703892 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.716671 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.727524 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:05Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.745353 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.745379 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.745389 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.745404 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.745413 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.847902 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.847935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.847947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.848007 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.848018 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.951529 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.951571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.951588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.951610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:05 crc kubenswrapper[4823]: I0121 17:17:05.951626 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:05Z","lastTransitionTime":"2026-01-21T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.055829 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.055928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.055953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.055990 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.056017 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.159679 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.159734 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.159753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.159781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.159801 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.262704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.262742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.262752 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.262766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.262777 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.304401 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:57:50.81671546 +0000 UTC Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.364618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.364661 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.364673 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.364691 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.364704 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.466718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.466775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.466786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.466802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.466814 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.528374 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" event={"ID":"48951ca6-6148-41a8-bdc2-d753cf3ecea9","Type":"ContainerStarted","Data":"96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1"} Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.540619 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.551804 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.562929 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.569352 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.569386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.569395 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.569408 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.569416 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.579481 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z 
is after 2025-08-24T17:21:41Z" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.591644 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.603587 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.617372 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.637103 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.649244 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.660593 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.671156 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.671267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.671303 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.671314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.671331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.671341 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.688265 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.697584 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.711241 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:06Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.773748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.773789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.773800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.773815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.773826 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.875933 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.875970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.875978 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.875991 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.876006 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.979890 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.980449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.980463 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.980486 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:06 crc kubenswrapper[4823]: I0121 17:17:06.980497 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:06Z","lastTransitionTime":"2026-01-21T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.083775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.083818 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.083829 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.083845 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.083875 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:07Z","lastTransitionTime":"2026-01-21T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.186493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.186569 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.186585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.186666 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.186689 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:07Z","lastTransitionTime":"2026-01-21T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.289474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.289536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.289554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.289578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.289597 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:07Z","lastTransitionTime":"2026-01-21T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.304663 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:37:43.982222058 +0000 UTC
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.343404 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.343454 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:17:07 crc kubenswrapper[4823]: E0121 17:17:07.343564 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.343621 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:17:07 crc kubenswrapper[4823]: E0121 17:17:07.343639 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:17:07 crc kubenswrapper[4823]: E0121 17:17:07.343760 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.391941 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.391973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.391983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.391996 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.392005 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:07Z","lastTransitionTime":"2026-01-21T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.493881 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.493924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.493935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.493949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.493960 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:07Z","lastTransitionTime":"2026-01-21T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.535185 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0"}
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.548417 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.562083 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.576639 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.593121 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.664157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.664203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.664221 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.664245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.664258 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:07Z","lastTransitionTime":"2026-01-21T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.679849 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z"
Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.692510 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.703257 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.718260 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.728782 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.754480 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56
bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.766527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 
17:17:07.766570 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.766581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.766599 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.766612 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:07Z","lastTransitionTime":"2026-01-21T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.767383 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.787226 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.829897 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.847677 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:07Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.868166 4823 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.868195 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.868203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.868217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.868225 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:07Z","lastTransitionTime":"2026-01-21T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.970967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.971031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.971054 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.971085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:07 crc kubenswrapper[4823]: I0121 17:17:07.971111 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:07Z","lastTransitionTime":"2026-01-21T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.074272 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.074344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.074367 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.074397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.074419 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.177124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.177182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.177202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.177224 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.177239 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.278741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.278772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.278781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.278796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.278805 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.305259 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:44:46.435684237 +0000 UTC Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.380503 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.380539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.380549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.380564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.380574 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.482757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.482797 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.482809 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.482824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.482835 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.538056 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.538413 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.538474 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.559132 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.560653 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.577497 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.585778 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.585812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.585830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.585847 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.585878 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.592652 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.606019 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\
\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.614935 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 
17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.626215 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.642387 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.666862 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"m
ountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.679389 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.688179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.688216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.688226 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.688243 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.688252 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.696963 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.707811 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.720931 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.733313 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.745662 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.757987 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.769720 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.781910 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.790363 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.790401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.790409 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.790424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.790435 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.793692 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.805123 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.814559 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.826593 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.835020 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.847708 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.857292 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.868372 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.880569 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.891953 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.892922 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.892955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.892964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.892980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.892989 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.902451 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.926189 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:08Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.995525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.995562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.995572 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.995588 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 21 17:17:08 crc kubenswrapper[4823]: I0121 17:17:08.995599 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:08Z","lastTransitionTime":"2026-01-21T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.098011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.098051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.098062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.098079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.098093 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:09Z","lastTransitionTime":"2026-01-21T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.203114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.203162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.203174 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.203217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.203230 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:09Z","lastTransitionTime":"2026-01-21T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.305341 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:04:22.601121155 +0000 UTC Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.305624 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.305645 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.305656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.305674 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.305684 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:09Z","lastTransitionTime":"2026-01-21T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.343275 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.343275 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:09 crc kubenswrapper[4823]: E0121 17:17:09.343389 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.343423 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:09 crc kubenswrapper[4823]: E0121 17:17:09.343481 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:09 crc kubenswrapper[4823]: E0121 17:17:09.343531 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.362007 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.379277 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.394706 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.408358 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.408543 4823 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.408575 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.408586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.408600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.408612 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:09Z","lastTransitionTime":"2026-01-21T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.422636 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.437665 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.453091 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.467694 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.477555 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.489908 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.503136 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.510920 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.510951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.510963 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.510979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.510990 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:09Z","lastTransitionTime":"2026-01-21T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.517001 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.528322 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.543039 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/0.log" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.546112 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0" exitCode=1 Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.546213 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.546994 4823 scope.go:117] "RemoveContainer" containerID="ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.547108 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c
4e483ca341c11409e71237a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.559693 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.578280 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.590773 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.604622 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.613092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.613138 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.613148 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.613165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.613176 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:09Z","lastTransitionTime":"2026-01-21T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.614587 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.626863 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.638409 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.649209 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.662025 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.679886 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:09Z\\\",\\\"message\\\":\\\" 6132 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:08.705552 6132 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:08.705557 6132 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:08.705590 6132 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 17:17:08.705601 6132 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 17:17:08.705623 6132 factory.go:656] Stopping watch factory\\\\nI0121 17:17:08.705649 6132 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:08.705650 6132 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:08.705659 6132 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:08.705666 6132 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:08.705680 6132 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 17:17:08.705687 6132 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:08.705700 6132 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:08.705703 6132 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:08.705924 6132 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.694584 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.707420 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.716097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.716338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.716406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.716471 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.716530 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:09Z","lastTransitionTime":"2026-01-21T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.720608 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.731358 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:09Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.818995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.819037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.819048 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.819064 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.819076 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:09Z","lastTransitionTime":"2026-01-21T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.921053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.921089 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.921100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.921118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:09 crc kubenswrapper[4823]: I0121 17:17:09.921129 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:09Z","lastTransitionTime":"2026-01-21T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.024471 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.024525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.024537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.024558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.024570 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.135131 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.135194 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.135205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.135219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.135228 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.237622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.237665 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.237677 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.237696 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.237707 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.306342 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:56:05.944060752 +0000 UTC Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.340754 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.340800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.340817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.340838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.340882 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.384555 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt"] Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.385073 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.387010 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.387245 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.402368 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.412520 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.422892 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.443122 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.443172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.443019 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c
4e483ca341c11409e71237a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:09Z\\\",\\\"message\\\":\\\" 6132 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:08.705552 6132 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:08.705557 6132 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:08.705590 6132 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 17:17:08.705601 6132 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 17:17:08.705623 6132 factory.go:656] Stopping watch factory\\\\nI0121 17:17:08.705649 6132 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:08.705650 6132 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:08.705659 6132 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:08.705666 6132 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:08.705680 6132 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 17:17:08.705687 6132 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:08.705700 6132 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:08.705703 6132 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:08.705924 6132 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.443183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.443366 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.443387 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.460614 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.479879 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.485498 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40eff6fe-74a2-4866-8002-700cebf3efbd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.485545 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsz4m\" (UniqueName: \"kubernetes.io/projected/40eff6fe-74a2-4866-8002-700cebf3efbd-kube-api-access-vsz4m\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.485593 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40eff6fe-74a2-4866-8002-700cebf3efbd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.485738 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40eff6fe-74a2-4866-8002-700cebf3efbd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.495786 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.519898 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.532649 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.543362 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.543708 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.545004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.545037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.545049 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.545064 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.545088 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.549935 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/0.log" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.552586 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.552698 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.558014 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.573960 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d77
73f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa
0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.583090 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.586527 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40eff6fe-74a2-4866-8002-700cebf3efbd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.586582 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsz4m\" (UniqueName: \"kubernetes.io/projected/40eff6fe-74a2-4866-8002-700cebf3efbd-kube-api-access-vsz4m\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.586656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40eff6fe-74a2-4866-8002-700cebf3efbd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.586683 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40eff6fe-74a2-4866-8002-700cebf3efbd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.587521 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40eff6fe-74a2-4866-8002-700cebf3efbd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.587667 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40eff6fe-74a2-4866-8002-700cebf3efbd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.596651 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40eff6fe-74a2-4866-8002-700cebf3efbd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.596724 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.605805 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.612566 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsz4m\" (UniqueName: \"kubernetes.io/projected/40eff6fe-74a2-4866-8002-700cebf3efbd-kube-api-access-vsz4m\") pod \"ovnkube-control-plane-749d76644c-67glt\" (UID: \"40eff6fe-74a2-4866-8002-700cebf3efbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.620901 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9
596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.632919 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.643492 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.647206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.647237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.647247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.647263 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.647277 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.654203 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.671661 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.681924 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.693937 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.697111 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.705363 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.716970 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.731670 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.744308 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.750292 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.750333 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.750349 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.750371 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.750386 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.759182 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: W0121 17:17:10.761923 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40eff6fe_74a2_4866_8002_700cebf3efbd.slice/crio-59664a68ac1e89f41022b569591e7570e8e71218886f7eb3a4a56703d5ab6194 WatchSource:0}: Error finding container 59664a68ac1e89f41022b569591e7570e8e71218886f7eb3a4a56703d5ab6194: Status 404 returned error can't find the container with id 59664a68ac1e89f41022b569591e7570e8e71218886f7eb3a4a56703d5ab6194 Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.773479 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.784312 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.808785 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:09Z\\\",\\\"message\\\":\\\" 6132 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:08.705552 6132 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:08.705557 6132 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:08.705590 6132 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 17:17:08.705601 6132 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 17:17:08.705623 6132 factory.go:656] Stopping watch factory\\\\nI0121 17:17:08.705649 6132 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:08.705650 6132 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:08.705659 6132 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:08.705666 6132 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:08.705680 6132 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 17:17:08.705687 6132 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:08.705700 6132 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:08.705703 6132 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:08.705924 6132 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.814787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.814816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.814826 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.814841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.814853 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: E0121 17:17:10.827010 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.831958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.832030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.832044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.832064 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.832076 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: E0121 17:17:10.843565 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.847808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.847845 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
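[Annotation] Every one of these node-status retries fails for the same root cause: the serving certificate presented by the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21. A minimal Go sketch (not from the log or the kubelet; assumes the webhook is still listening on that local port) to inspect the certificate window the kubelet's TLS client is rejecting:

// certcheck.go - print the validity window of the TLS certificate served
// on a local endpoint. Assumes the webhook at 127.0.0.1:9743 is listening.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify lets us inspect an expired certificate instead of
	// failing the handshake the way the kubelet's client does.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

Skipping verification is acceptable here only because the goal is to read NotAfter from a certificate the standard handshake refuses.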
event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.847857 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.847883 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.847893 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: E0121 17:17:10.861063 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.864650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.864684 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.864692 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.864708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.864718 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: E0121 17:17:10.876206 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.881083 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.881144 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
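[Annotation] The condition object that setters.go logs on each cycle is a standard v1.NodeCondition. A small standard-library sketch (the struct below mirrors the v1.NodeCondition field names; the payload is copied verbatim from the records above) that decodes it and shows why Ready stays False:

// condition.go - decode the "Node became not ready" condition object
// that setters.go logs above.
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// Prints: Ready=False (KubeletNotReady): container runtime network not ready ...
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}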
event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.881155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.881178 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.881210 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:10 crc kubenswrapper[4823]: E0121 17:17:10.899923 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:10Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:10 crc kubenswrapper[4823]: E0121 17:17:10.900504 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.902794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
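[Annotation] The "exceeds retry count" record at 17:17:10.900504 marks the kubelet giving up on this status-sync pass: upstream kubelet attempts the patch a fixed number of times per pass (nodeStatusUpdateRetry, 5 in current releases) before deferring to the next sync interval, which matches the run of "will retry" failures above. A paraphrased sketch of that control flow (simplified; not the actual kubelet source, and tryUpdateNodeStatus here is a stub):

// retry.go - paraphrase of the kubelet's node-status update loop
// (the real implementation lives in pkg/kubelet/kubelet_node_status.go).
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // upstream kubelet constant

func tryUpdateNodeStatus(attempt int) error {
	// Stub: the real function builds the patch and sends it; here every
	// attempt fails the way the expired webhook certificate makes it fail.
	return errors.New("failed to patch status: webhook certificate expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("E attempt %d: %v (will retry)\n", i+1, err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("E", err) // matches the 17:17:10.900504 record
	}
}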
event="NodeHasSufficientMemory" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.902918 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.902996 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.903079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:10 crc kubenswrapper[4823]: I0121 17:17:10.903161 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:10Z","lastTransitionTime":"2026-01-21T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.007114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.007151 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.007162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.007178 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.007189 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.110050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.110092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.110101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.110116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.110126 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.212993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.213036 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.213049 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.213072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.213091 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.306913 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 07:06:42.200122972 +0000 UTC
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.316325 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.316371 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.316380 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.316397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.316407 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.343187 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:17:11 crc kubenswrapper[4823]: E0121 17:17:11.343387 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.344081 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
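[Annotation] An easy-to-miss detail in the certificate_manager.go record above: the kubelet-serving certificate's rotation deadline (2025-12-06) is already in the past relative to the node clock (2026-01-21), so rotation is overdue, even though that certificate itself remains valid until 2026-02-24; this is why the kubelet can still serve while the long-expired webhook certificate cannot be used at all. A trivial check of those two timestamps (values copied from the record; the layout string is Go's default time formatting):

// rotation.go - compare the rotation deadline from the certificate_manager
// record against the node's current time (both copied from the log).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go time.String() layout
	deadline, err := time.Parse(layout, "2025-12-06 07:06:42.200122972 +0000 UTC")
	if err != nil {
		panic(err)
	}
	now, err := time.Parse(layout, "2026-01-21 17:17:11 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("rotation overdue:", now.After(deadline)) // prints: rotation overdue: true
}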
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.344081 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.344145 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:17:11 crc kubenswrapper[4823]: E0121 17:17:11.344296 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:17:11 crc kubenswrapper[4823]: E0121 17:17:11.344478 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.419531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.419587 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.419604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.419628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.419647 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
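Each "Node became not ready" line from setters.go:603 carries the full Ready condition as a condition={...} JSON payload, so the transitions can be extracted from the journal mechanically. A stdlib-only sketch that decodes one such payload; the struct fields mirror the keys visible in the entries above, and the sample line is abbreviated for readability:

```go
// Decode the condition={...} payload that setters.go:603 attaches to
// every "Node became not ready" entry in this journal.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// nodeCondition mirrors the condition keys visible in the log payload.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Abbreviated copy of an entry above; real lines carry the full message.
	line := `I0121 17:17:11.419647 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	_, payload, ok := strings.Cut(line, "condition=")
	if !ok {
		panic("no condition= payload found")
	}
	var c nodeCondition
	if err := json.Unmarshal([]byte(payload), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s since %s\n", c.Type, c.Status, c.Reason, c.LastTransitionTime)
}
```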
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.522369 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.522414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.522424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.522440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.522454 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.558806 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/1.log"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.559572 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/0.log"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.563117 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7" exitCode=1
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.563169 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7"}
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.563248 4823 scope.go:117] "RemoveContainer" containerID="ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.563774 4823 scope.go:117] "RemoveContainer" containerID="bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7"
Jan 21 17:17:11 crc kubenswrapper[4823]: E0121 17:17:11.564120 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8"
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.565058 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" event={"ID":"40eff6fe-74a2-4866-8002-700cebf3efbd","Type":"ContainerStarted","Data":"c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93"}
Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.565484 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" event={"ID":"40eff6fe-74a2-4866-8002-700cebf3efbd","Type":"ContainerStarted","Data":"59664a68ac1e89f41022b569591e7570e8e71218886f7eb3a4a56703d5ab6194"} Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.583273 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc 
kubenswrapper[4823]: I0121 17:17:11.602114 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.615651 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.625439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.625489 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.625504 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.625524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.625538 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.631593 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.642667 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.654547 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.662614 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.673519 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.682222 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.692817 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.703277 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.715998 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.726721 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.728105 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.728137 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.728149 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.728166 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.728197 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.737094 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.757570 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:09Z\\\",\\\"message\\\":\\\" 6132 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:08.705552 6132 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:08.705557 6132 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:08.705590 6132 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 17:17:08.705601 6132 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 17:17:08.705623 6132 factory.go:656] Stopping watch factory\\\\nI0121 17:17:08.705649 6132 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:08.705650 6132 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:08.705659 6132 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:08.705666 6132 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:08.705680 6132 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 17:17:08.705687 6132 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:08.705700 6132 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:08.705703 6132 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:08.705924 6132 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"11.021943 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 17:17:11.021976 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0121 17:17:11.022001 6246 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 17:17:11.022033 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 17:17:11.022043 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 17:17:11.022074 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 17:17:11.022248 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:11.022267 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:11.022269 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:11.022346 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:11.022387 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:11.022419 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:11.022454 6246 factory.go:656] Stopping watch factory\\\\nI0121 17:17:11.022484 6246 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.831002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.831058 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.831073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.831100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.831152 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.860663 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-htjnl"] Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.861094 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:11 crc kubenswrapper[4823]: E0121 17:17:11.861151 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.874718 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.886594 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.896931 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.903227 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.903321 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4pfc\" (UniqueName: \"kubernetes.io/projected/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-kube-api-access-s4pfc\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.912645 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"
2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8
1451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.922425 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.933242 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.933274 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.933284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.933299 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.933310 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:11Z","lastTransitionTime":"2026-01-21T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.933608 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.946903 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.960392 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.971477 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:11 crc kubenswrapper[4823]: I0121 17:17:11.990094 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce904b0767ec1c85ff41e10355f4aad5726cb16c4e483ca341c11409e71237a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:09Z\\\",\\\"message\\\":\\\" 6132 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:08.705552 6132 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:08.705557 6132 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:08.705590 6132 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 17:17:08.705601 6132 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 17:17:08.705623 6132 factory.go:656] Stopping watch factory\\\\nI0121 17:17:08.705649 6132 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:08.705650 6132 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:08.705659 6132 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:08.705666 6132 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:08.705680 6132 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 17:17:08.705687 6132 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:08.705700 6132 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:08.705703 6132 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:08.705924 6132 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"11.021943 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 17:17:11.021976 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0121 17:17:11.022001 6246 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 17:17:11.022033 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 17:17:11.022043 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 17:17:11.022074 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 17:17:11.022248 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:11.022267 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:11.022269 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:11.022346 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:11.022387 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:11.022419 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:11.022454 6246 factory.go:656] Stopping watch factory\\\\nI0121 17:17:11.022484 6246 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.001507 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.003809 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.003890 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4pfc\" (UniqueName: \"kubernetes.io/projected/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-kube-api-access-s4pfc\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:12 crc kubenswrapper[4823]: E0121 17:17:12.004037 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:12 crc kubenswrapper[4823]: E0121 17:17:12.004159 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs podName:9bcd33a4-ea1e-4977-8456-e34f2ed4c680 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:12.504135697 +0000 UTC m=+33.430266557 (durationBeforeRetry 500ms). 
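Every "Failed to update status for pod" entry in this stretch fails for the same reason: the kubelet reaches the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, but TLS verification rejects the webhook's serving certificate, which expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-21. The same check can be reproduced from the node with a short Python 3 sketch (the address, port, and dates come from the log; the third-party cryptography package and everything else here are assumptions for illustration):

    import datetime
    import ssl

    from cryptography import x509  # third-party "cryptography" package

    # Fetch the webhook's serving certificate without verifying it
    # (verification is exactly what fails in the log), then compare its
    # notAfter to the clock, as the kubelet's TLS handshake does.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())

    now = datetime.datetime.now(datetime.timezone.utc)
    not_after = cert.not_valid_after.replace(tzinfo=datetime.timezone.utc)
    if now > not_after:
        # Mirrors: "x509: certificate has expired or is not yet valid:
        # current time ... is after 2025-08-24T17:21:41Z"
        print(f"expired: current time {now:%Y-%m-%dT%H:%M:%SZ} "
              f"is after {not_after:%Y-%m-%dT%H:%M:%SZ}")

Until that certificate is rotated (or the clock discrepancy is resolved), every webhook-gated status write from this node will keep failing the same way.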
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs") pod "network-metrics-daemon-htjnl" (UID: "9bcd33a4-ea1e-4977-8456-e34f2ed4c680") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.013976 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.020399 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4pfc\" (UniqueName: \"kubernetes.io/projected/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-kube-api-access-s4pfc\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.027196 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.035266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.035301 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.035345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.035360 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.035370 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.042957 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.068132 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.091611 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.137771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.137821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.137831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.137848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.137885 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.240552 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.240625 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.240640 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.240658 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.240670 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.307551 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:44:15.181264996 +0000 UTC Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.343319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.343373 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.343386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.343404 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.343418 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.446072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.446138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.446155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.446179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.446196 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.508689 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:12 crc kubenswrapper[4823]: E0121 17:17:12.509075 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:12 crc kubenswrapper[4823]: E0121 17:17:12.509153 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs podName:9bcd33a4-ea1e-4977-8456-e34f2ed4c680 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:13.509134906 +0000 UTC m=+34.435265766 (durationBeforeRetry 1s). 
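The failed metrics-certs mount is being retried on a doubling schedule: the first retry above was deferred 500ms ("durationBeforeRetry 500ms"), this one 1s. Those two data points fit an exponential backoff; a sketch of the pattern (the initial delay and factor match the log, the cap is an assumption, not taken from the log):

    def retry_delays(initial=0.5, factor=2.0, cap=120.0):
        """Yield doubling retry delays in seconds: 0.5, 1.0, 2.0, ... up to cap."""
        delay = initial
        while True:
            yield min(delay, cap)
            delay = min(delay * factor, cap)

    delays = retry_delays()
    for attempt in range(5):
        print(f"retry {attempt + 1}: wait {next(delays)}s")

The spacing only matters once the underlying error clears: each attempt still fails with object "openshift-multus"/"metrics-daemon-secret" not registered.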
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs") pod "network-metrics-daemon-htjnl" (UID: "9bcd33a4-ea1e-4977-8456-e34f2ed4c680") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.549031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.549067 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.549077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.549094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.549107 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.569680 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/1.log" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.574628 4823 scope.go:117] "RemoveContainer" containerID="bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.574819 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" event={"ID":"40eff6fe-74a2-4866-8002-700cebf3efbd","Type":"ContainerStarted","Data":"6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073"} Jan 21 17:17:12 crc kubenswrapper[4823]: E0121 17:17:12.574975 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.589695 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.602760 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.615463 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" 
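[Annotation, not part of the captured log: every "Failed to update status for pod" entry above and below shares one root cause. The kubelet's status patches are intercepted by the "pod.network-node-identity.openshift.io" admission webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-21; Go's crypto/x509 verifier therefore rejects every TLS handshake with the exact message logged here. This is also consistent with the NotReady condition earlier in the log, since on OVN-Kubernetes clusters the crash-looping ovnkube-controller is the component expected to write the CNI config into /etc/kubernetes/cni/net.d/. The sketch below is a minimal, hypothetical reproduction of the validity check that fails, under the assumption that the webhook's serving certificate has been extracted to a local PEM file; the path is illustrative, not taken from this log.

// cert_expiry_check.go - minimal sketch of the crypto/x509 validity window
// check behind "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path: dump the webhook serving cert here first, e.g. from
	// the secret mounted at /etc/webhook-cert/ in network-node-identity-vrzqb.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	// crypto/x509 rejects a chain whenever now falls outside
	// [NotBefore, NotAfter] - the condition reported in the entries above
	// (current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z).
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: valid %s to %s, now %s\n",
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}

End of annotation; the captured log continues below.]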
Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.627516 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.643054 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.651567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.651624 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.651643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.651667 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.651686 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.656787 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.677598 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"11.021943 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 17:17:11.021976 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:11.022001 6246 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 17:17:11.022033 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 17:17:11.022043 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 17:17:11.022074 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 17:17:11.022248 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:11.022267 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:11.022269 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:11.022346 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:11.022387 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:11.022419 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:11.022454 6246 factory.go:656] Stopping watch factory\\\\nI0121 17:17:11.022484 6246 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.688345 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.700243 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.711856 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.725545 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.736724 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.750523 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.753967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.754031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.754053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.754079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.754094 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.761911 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.775836 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.789741 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.803887 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.816097 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.829435 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.841493 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.857574 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.857658 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.857720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.857742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.857774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.857799 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.870003 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.897750 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"11.021943 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 17:17:11.021976 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:11.022001 6246 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 17:17:11.022033 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 17:17:11.022043 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 17:17:11.022074 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 17:17:11.022248 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:11.022267 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:11.022269 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:11.022346 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:11.022387 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:11.022419 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:11.022454 6246 factory.go:656] Stopping watch factory\\\\nI0121 17:17:11.022484 6246 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.915201 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.932319 4823 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e
0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.949857 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.960598 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.960637 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.960645 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.960660 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.960668 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:12Z","lastTransitionTime":"2026-01-21T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.968398 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.982363 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:12 crc kubenswrapper[4823]: I0121 17:17:12.999785 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.014859 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:13Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.032373 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:13Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.047993 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:13Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.063005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.063055 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.063069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.063093 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.063111 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.165850 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.165942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.165960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.165982 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.165997 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.170328 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.216186 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216360 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:17:29.216336364 +0000 UTC m=+50.142467224 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.216357 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.216456 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.216483 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.216557 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216603 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216681 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216709 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216759 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216783 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 
17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216718 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:29.216710733 +0000 UTC m=+50.142841593 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216900 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:29.216832906 +0000 UTC m=+50.142963806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216940 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:29.216922098 +0000 UTC m=+50.143053048 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216975 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.216999 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.217018 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.217137 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:29.217113962 +0000 UTC m=+50.143244903 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.269529 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.269605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.269630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.269660 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.269683 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.308473 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:56:00.741371258 +0000 UTC Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.342762 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.342822 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.342826 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.342954 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.343160 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.343382 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.343513 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.343698 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.372755 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.372824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.372843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.372921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.372941 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.475895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.475987 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.476011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.476042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.476065 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.519417 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.519720 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.519927 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs podName:9bcd33a4-ea1e-4977-8456-e34f2ed4c680 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:15.519894372 +0000 UTC m=+36.446025272 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs") pod "network-metrics-daemon-htjnl" (UID: "9bcd33a4-ea1e-4977-8456-e34f2ed4c680") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.578082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.578143 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.578153 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.578169 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.578180 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.580048 4823 scope.go:117] "RemoveContainer" containerID="bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7" Jan 21 17:17:13 crc kubenswrapper[4823]: E0121 17:17:13.580356 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.682032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.682101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.682114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.682141 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.682157 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
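[editor's note] Two back-off mechanisms are visible above: the failed ovnkube-controller container is held in CrashLoopBackOff ("back-off 10s restarting failed container"), and the failed metrics-certs mount is rescheduled with durationBeforeRetry 2s, doubling to 4s on the next failure further down. A minimal Go sketch of that doubling-with-cap delay follows; the base and cap constants are illustrative assumptions, not values read from kubelet source.

package main

import (
	"fmt"
	"time"
)

// nextRetryDelay returns how long to wait before retry number n
// (0-based), doubling the base delay on each consecutive failure and
// clamping at max -- the same shape as the 2s -> 4s progression in
// the nestedpendingoperations lines above. Constants are assumed.
func nextRetryDelay(n int, base, max time.Duration) time.Duration {
	d := base
	for i := 0; i < n; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Printf("failure %d: wait %v\n", n, nextRetryDelay(n, 2*time.Second, 2*time.Minute))
	}
}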
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.785126 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.785202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.785221 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.785247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.785263 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.888308 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.888355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.888368 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.888385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.888398 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.991415 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.991479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.991500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.991529 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:13 crc kubenswrapper[4823]: I0121 17:17:13.991553 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:13Z","lastTransitionTime":"2026-01-21T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.095302 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.095374 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.095396 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.095427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.095448 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:14Z","lastTransitionTime":"2026-01-21T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.198402 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.198494 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.198566 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.198598 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.198652 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:14Z","lastTransitionTime":"2026-01-21T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.302508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.302540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.302550 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.302566 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.302596 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:14Z","lastTransitionTime":"2026-01-21T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.308972 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:47:54.360693763 +0000 UTC
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.405751 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.405823 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.405845 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.405908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.405929 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:14Z","lastTransitionTime":"2026-01-21T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.508923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.508965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.508975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.508993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.509003 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:14Z","lastTransitionTime":"2026-01-21T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
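[editor's note] The certificate_manager lines report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on each pass (2026-01-12 here, 2025-12-28 and 2026-01-09 below), and every reported deadline already lies before the Jan 21 timestamps, so rotation is due. The drifting value is consistent with a jittered deadline drawn from the tail of the certificate's validity window; a hedged Go sketch of that idea follows, with the 70-90% window an assumption for illustration rather than a quote of client-go's exact constants.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant between 70% and 90% of the
// certificate's lifetime. Recomputing it yields a different deadline
// each time, matching the drifting values in the log. The split is an
// assumed approximation, not the exact upstream formula.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(lifetime) * frac))
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(0, -3, 0) // assumed issuance date
	for i := 0; i < 3; i++ {
		fmt.Println("candidate rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}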
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.612423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.612491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.612513 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.612549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.612574 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:14Z","lastTransitionTime":"2026-01-21T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.716948 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.717072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.717098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.717129 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.717150 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:14Z","lastTransitionTime":"2026-01-21T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.820277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.820331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.820341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.820362 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.820376 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:14Z","lastTransitionTime":"2026-01-21T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.923147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.923187 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.923197 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.923211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:14 crc kubenswrapper[4823]: I0121 17:17:14.923224 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:14Z","lastTransitionTime":"2026-01-21T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.025958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.026048 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.026060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.026079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.026090 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.128873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.128912 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.128924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.128940 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.128950 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.231892 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.231943 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.231955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.231972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.231983 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.309754 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 10:49:30.8274523 +0000 UTC
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.334498 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.334599 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.334623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.334651 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.334673 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.343099 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:17:15 crc kubenswrapper[4823]: E0121 17:17:15.343255 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.343671 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
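[editor's note] Every "Node became not ready" line embeds the Ready condition as a JSON object. When triaging a dump like this it can help to decode those payloads rather than eyeball them; a small sketch below uses a struct that mirrors only the fields present in these lines (a hand-rolled subset of the real corev1.NodeCondition, for illustration).

package main

import (
	"encoding/json"
	"fmt"
)

// readyCondition mirrors the condition={...} payloads logged above.
type readyCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c readyCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s (%s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}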
Jan 21 17:17:15 crc kubenswrapper[4823]: E0121 17:17:15.343784 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.343915 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:17:15 crc kubenswrapper[4823]: E0121 17:17:15.344067 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.344256 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:17:15 crc kubenswrapper[4823]: E0121 17:17:15.344355 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.438492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.438568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.438591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.438619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.438640 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.543753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.543844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.543916 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.543952 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.543986 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.546101 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:17:15 crc kubenswrapper[4823]: E0121 17:17:15.546310 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 17:17:15 crc kubenswrapper[4823]: E0121 17:17:15.546428 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs podName:9bcd33a4-ea1e-4977-8456-e34f2ed4c680 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:19.546396566 +0000 UTC m=+40.472527456 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs") pod "network-metrics-daemon-htjnl" (UID: "9bcd33a4-ea1e-4977-8456-e34f2ed4c680") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.647743 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.647798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.647810 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.647827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.647837 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.749771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.749820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.749833 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.749849 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.749882 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
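[editor's note] The mount keeps failing because the metrics-daemon-secret object is "not registered" with the kubelet's volume manager yet. A quick way to confirm whether the secret exists on the API side at all is a short client-go lookup, sketched below; the kubeconfig path is a placeholder assumption, and on a live cluster `oc get secret -n openshift-multus metrics-daemon-secret` answers the same question.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is an assumption; point this at any kubeconfig with read access.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	s, err := cs.CoreV1().Secrets("openshift-multus").Get(context.TODO(), "metrics-daemon-secret", metav1.GetOptions{})
	if err != nil {
		fmt.Println("secret not retrievable:", err)
		return
	}
	fmt.Println("secret exists, resourceVersion", s.ResourceVersion)
}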
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.852371 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.852423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.852469 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.852492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.852506 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.955275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.955331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.955347 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.955368 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:15 crc kubenswrapper[4823]: I0121 17:17:15.955384 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:15Z","lastTransitionTime":"2026-01-21T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.058365 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.058403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.058411 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.058428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.058437 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.161836 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.161982 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.162001 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.162032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.162053 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.264681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.264718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.264729 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.264741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.264750 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.310618 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:39:47.258764454 +0000 UTC
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.367291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.367354 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.367364 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.367386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.367400 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.471280 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.471329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.471340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.471357 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.471368 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.574152 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.574192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.574203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.574220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.574230 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.677078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.677202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.677215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.677236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.677250 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.779550 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.779613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.779623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.779648 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.779659 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.882509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.882575 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.882586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.882609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.882624 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.985924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.985973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.985989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.986006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:16 crc kubenswrapper[4823]: I0121 17:17:16.986016 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:16Z","lastTransitionTime":"2026-01-21T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.088900 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.088943 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.088954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.088971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.088983 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:17Z","lastTransitionTime":"2026-01-21T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.191776 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.191901 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.191934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.191963 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.191985 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:17Z","lastTransitionTime":"2026-01-21T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.294896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.294949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.294964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.294981 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.294991 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:17Z","lastTransitionTime":"2026-01-21T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.311403 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:44:47.9849982 +0000 UTC
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.343279 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.343424 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:17:17 crc kubenswrapper[4823]: E0121 17:17:17.343499 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
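[editor's note] From here to the end of the capture the stream is dominated by the same five node-status lines repeating roughly every 100ms. When reading such dumps it is often easier to collapse consecutive duplicates by message key first; a throwaway Go filter along those lines is sketched below (the key regex is a heuristic for these kubelet lines, illustrative only).

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// keyRE extracts the stable part of a kubelet log line: the first
// quoted message plus any trailing event="..." field, ignoring the
// timestamps and microsecond offsets that differ on every line.
var keyRE = regexp.MustCompile(`"[^"]+"(?: .*?event="[^"]+")?`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow very long log lines
	var last string
	count := 0
	flush := func() {
		if count > 0 {
			fmt.Printf("%6dx %s\n", count, last)
		}
	}
	for sc.Scan() {
		k := keyRE.FindString(sc.Text())
		if k == last {
			count++
			continue
		}
		flush()
		last, count = k, 1
	}
	flush()
}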
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.343490 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:17 crc kubenswrapper[4823]: E0121 17:17:17.343941 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.343976 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:17 crc kubenswrapper[4823]: E0121 17:17:17.343764 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:17 crc kubenswrapper[4823]: E0121 17:17:17.344154 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.397701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.397732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.397742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.397754 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.397762 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:17Z","lastTransitionTime":"2026-01-21T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.500116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.500156 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.500167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.500182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.500192 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:17Z","lastTransitionTime":"2026-01-21T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.603114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.603169 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.603181 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.603200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.603213 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:17Z","lastTransitionTime":"2026-01-21T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.706777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.706837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.706848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.706887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.706900 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:17Z","lastTransitionTime":"2026-01-21T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.809684 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.809750 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.809762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.809785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.809800 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:17Z","lastTransitionTime":"2026-01-21T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.912258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.912337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.912353 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.912376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:17 crc kubenswrapper[4823]: I0121 17:17:17.912399 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:17Z","lastTransitionTime":"2026-01-21T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.015312 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.015379 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.015392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.015417 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.015435 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.118411 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.118485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.118508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.118543 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.118562 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.221106 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.221179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.221199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.221230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.221254 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
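The five-line block above repeats roughly every 100 ms: the kubelet keeps reporting the node NotReady because the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (on this node presumably written by OVN-Kubernetes once ovnkube-controller stays up). A minimal sketch of that directory check, in Python for illustration only; the real test lives in the runtime's Go CNI code, and all names below are hypothetical:

from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")   # directory named in the log
CNI_SUFFIXES = {".conf", ".conflist", ".json"}     # conventional CNI config suffixes

def network_ready(conf_dir: Path = CNI_CONF_DIR) -> bool:
    """True once at least one CNI network config exists in conf_dir."""
    if not conf_dir.is_dir():
        return False
    return any(p.suffix in CNI_SUFFIXES for p in conf_dir.iterdir() if p.is_file())

if __name__ == "__main__":
    if not network_ready():
        print("NetworkReady=false: no CNI configuration file in "
              f"{CNI_CONF_DIR}/. Has your network provider started?")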
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.312199 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:51:38.14235664 +0000 UTC
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.324038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.324082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.324092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.324107 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.324117 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.426512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.426567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.426577 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.426597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.426609 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.530155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.530200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.530211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.530227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.530238 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.633012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.633065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.633077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.633096 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.633112 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.736011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.736093 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.736111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.736128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.736139 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.838161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.838212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.838227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.838251 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.838271 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.940647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.940695 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.940711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.940731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:18 crc kubenswrapper[4823]: I0121 17:17:18.940747 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:18Z","lastTransitionTime":"2026-01-21T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.043182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.043229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.043260 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.043278 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.043291 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.146276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.146320 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.146330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.146345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.146356 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.249572 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.249618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.249627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.249645 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.249656 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.313306 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:24:08.412181397 +0000 UTC
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.342776 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:17:19 crc kubenswrapper[4823]: E0121 17:17:19.342999 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.343024 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
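The two certificate_manager lines above report the same kubelet-serving certificate (expires 2026-02-24) but different rotation deadlines one second apart, and both deadlines are already in the past relative to the node clock, so the manager keeps attempting rotation and recomputing. This matches client-go's jittered deadline, drawn uniformly from the 70-90% span of the certificate's lifetime. A Python rendering of that computation, as a sketch; the notBefore below is a hypothetical one-year lifetime, since the log only shows the expiration:

import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    """Pick a deadline uniformly inside 70-90% of the cert's lifetime."""
    total = not_after - not_before
    return not_before + total * (0.7 + 0.2 * random.random())

not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiration printed in the log
not_before = not_after - timedelta(days=365)   # hypothetical issue date
print(rotation_deadline(not_before, not_after))  # a different instant each call,
# consistent with the two distinct deadlines logged one second apart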
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.343139 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:19 crc kubenswrapper[4823]: E0121 17:17:19.343293 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.343347 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:19 crc kubenswrapper[4823]: E0121 17:17:19.343403 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:19 crc kubenswrapper[4823]: E0121 17:17:19.343450 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.351932 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.351985 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.352002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.352025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.352040 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.366107 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"11.021943 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 17:17:11.021976 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:11.022001 6246 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 17:17:11.022033 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 17:17:11.022043 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 17:17:11.022074 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 17:17:11.022248 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:11.022267 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:11.022269 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:11.022346 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:11.022387 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:11.022419 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:11.022454 6246 factory.go:656] Stopping watch factory\\\\nI0121 17:17:11.022484 6246 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.388061 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.402604 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.414658 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.437090 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.451892 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.455152 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.455234 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.455247 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.455265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.455298 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.466539 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.482061 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.496163 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.513979 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.536996 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.551834 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.557864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.557909 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.557920 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.557937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.557947 4823 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.572744 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.587297 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:19 crc kubenswrapper[4823]: E0121 17:17:19.587526 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:19 crc kubenswrapper[4823]: E0121 17:17:19.587644 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs podName:9bcd33a4-ea1e-4977-8456-e34f2ed4c680 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:27.587615718 +0000 UTC m=+48.513746608 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs") pod "network-metrics-daemon-htjnl" (UID: "9bcd33a4-ea1e-4977-8456-e34f2ed4c680") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.587809 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.602308 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\
"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.615330 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:19Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.660616 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.660658 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.660668 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.660682 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.660692 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.763950 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.764001 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.764012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.764028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.764040 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.866164 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.866219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.866232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.866248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.866260 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.969290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.969398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.969414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.969441 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:19 crc kubenswrapper[4823]: I0121 17:17:19.969455 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:19Z","lastTransitionTime":"2026-01-21T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.071986 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.072030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.072041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.072059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.072070 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:20Z","lastTransitionTime":"2026-01-21T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.175262 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.175356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.175373 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.175398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.175415 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:20Z","lastTransitionTime":"2026-01-21T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.278093 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.278199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.278223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.278254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.278272 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:20Z","lastTransitionTime":"2026-01-21T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.313653 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:47:25.677657262 +0000 UTC
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.380456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.380496 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.380507 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.380524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.380535 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:20Z","lastTransitionTime":"2026-01-21T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.483573 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.483642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.483660 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.483687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.483706 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:20Z","lastTransitionTime":"2026-01-21T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.586806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.586890 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.586908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.586929 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.586942 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:20Z","lastTransitionTime":"2026-01-21T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.689896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.689986 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.689998 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.690018 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.690031 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:20Z","lastTransitionTime":"2026-01-21T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.794023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.794106 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.794130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.794162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.794202 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:20Z","lastTransitionTime":"2026-01-21T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.896493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.896568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.896589 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.897214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:20 crc kubenswrapper[4823]: I0121 17:17:20.897259 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:20Z","lastTransitionTime":"2026-01-21T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.000761 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.000830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.000888 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.000920 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.000939 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.103843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.103904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.103913 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.103926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.104177 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.192445 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.192515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.192532 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.192555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.192572 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.210627 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:21Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.215207 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.215253 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.215265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.215287 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.215299 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.229507 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:21Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.234098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.234182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.234206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.234236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.234258 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.247830 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:21Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.252040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.252096 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.252111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.252133 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.252148 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.266068 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:21Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.270050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.270085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.270097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.270114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.270126 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.288235 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:21Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.288483 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.290474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
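event="NodeHasSufficientMemory"

Both failed patch attempts above die at the same point: the API server must consult the node.network-node-identity.openshift.io admission webhook before accepting the node-status patch, and the webhook's serving certificate on 127.0.0.1:9743 expired on 2025-08-24, months before the node clock's 2026-01-21. After a small fixed number of in-process retries the kubelet logs "update node status exceeds retry count" and waits for the next heartbeat interval. The certificate window can be confirmed from the node with a minimal sketch like the following (host and port are taken from the Post URL in the error; the third-party cryptography package is an assumed dependency):

    # check_webhook_cert.py - fetch the webhook's TLS certificate even though
    # it no longer verifies, and print its validity window.
    import socket
    import ssl
    from datetime import datetime

    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743          # from the Post URL in the log

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE         # accept the expired certificate

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)   # raw DER bytes

    cert = x509.load_der_x509_certificate(der)
    print("notBefore:", cert.not_valid_before)        # naive UTC datetimes
    print("notAfter: ", cert.not_valid_after)
    print("expired:  ", datetime.utcnow() > cert.not_valid_after)

Against this node that should report notAfter 2025-08-24 17:21:41 and expired True, matching the x509 error text.
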
event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.290521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.290537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.290612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.290637 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.314140 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:41:04.220773992 +0000 UTC Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.343608 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.343680 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.343702 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.343635 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.343793 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.344072 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.344250 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:21 crc kubenswrapper[4823]: E0121 17:17:21.344392 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.394808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.395287 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.395448 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.395613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.395743 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.499580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.499651 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.499674 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.499708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.499731 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.602490 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.602535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.602546 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.602563 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.602573 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.705715 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.705793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.705826 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.705891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.705921 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.808714 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.808755 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.808766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.808782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.808794 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.911140 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.911174 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.911183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.911196 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:21 crc kubenswrapper[4823]: I0121 17:17:21.911207 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:21Z","lastTransitionTime":"2026-01-21T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.013305 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.013365 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.013382 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.013738 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.013781 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.116368 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.116409 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.116419 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.116434 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.116446 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.219186 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.219300 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.219318 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.219341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.219363 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.314784 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:36:15.819396452 +0000 UTC Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.322631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.322666 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.322677 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.322693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.322713 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.425400 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.425458 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.425477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.425502 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.425519 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.528221 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.528274 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.528289 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.528310 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.528326 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.630457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.630508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.630522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.630543 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.630559 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.732964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.733000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.733014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.733031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.733043 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.837073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.837168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.837223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.837248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.837341 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.941570 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.941619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.941632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.941654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:22 crc kubenswrapper[4823]: I0121 17:17:22.941665 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:22Z","lastTransitionTime":"2026-01-21T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.044078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.044267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.044283 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.044355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.044369 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.147839 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.147949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.147971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.148002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.148026 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.250817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.250877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.250886 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.250899 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.250907 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.315387 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:29:38.087635418 +0000 UTC Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.343082 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.343219 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:23 crc kubenswrapper[4823]: E0121 17:17:23.343340 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.343749 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.343803 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:23 crc kubenswrapper[4823]: E0121 17:17:23.343937 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:23 crc kubenswrapper[4823]: E0121 17:17:23.344104 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:23 crc kubenswrapper[4823]: E0121 17:17:23.344292 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.353485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.353534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.353545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.353556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.353566 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.456213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.456266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.456276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.456290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.456300 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.558512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.558569 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.558581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.558599 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.558612 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.661396 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.661465 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.661488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.661515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.661538 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.767849 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.768500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.768748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.768947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.769154 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.872267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.872316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.872354 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.872374 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.872389 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.974579 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.974908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.974996 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.975106 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:23 crc kubenswrapper[4823]: I0121 17:17:23.975182 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:23Z","lastTransitionTime":"2026-01-21T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.078225 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.078526 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.078596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.078660 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.078721 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:24Z","lastTransitionTime":"2026-01-21T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.182008 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.182073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.182092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.182116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.182133 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:24Z","lastTransitionTime":"2026-01-21T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.285164 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.285217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.285229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.285248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.285261 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:24Z","lastTransitionTime":"2026-01-21T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.316224 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:04:40.15901833 +0000 UTC
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.388681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.388728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.388747 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.388774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.388794 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:24Z","lastTransitionTime":"2026-01-21T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.491475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.491893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.492047 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.492217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.492350 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:24Z","lastTransitionTime":"2026-01-21T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.594999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.595032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.595041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.595056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.595066 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:24Z","lastTransitionTime":"2026-01-21T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.698571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.698882 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.699014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.699160 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.699284 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:24Z","lastTransitionTime":"2026-01-21T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.801701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.801741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.801753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.801768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.801778 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:24Z","lastTransitionTime":"2026-01-21T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.905023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.905074 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.905092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.905114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:24 crc kubenswrapper[4823]: I0121 17:17:24.905130 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:24Z","lastTransitionTime":"2026-01-21T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.008383 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.008445 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.008464 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.008489 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.008506 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.111635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.111693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.111710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.111733 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.111750 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
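
The kubelet re-logs this Ready=False condition on every node-status sync (roughly every 100 ms above) until the container runtime reports NetworkReady, which requires at least one CNI network definition in /etc/kubernetes/cni/net.d/. As a rough illustration of the readiness test named in the message, here is a minimal Python sketch; the "one parseable .conf/.conflist counts as ready" rule is an assumption for illustration, not CRI-O's actual implementation:

# Illustrative sketch: approximate the NetworkReady check implied by the
# "no CNI configuration file in /etc/kubernetes/cni/net.d/" message above.
import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log

def network_ready(conf_dir=CNI_CONF_DIR):
    # Ready iff at least one CNI network definition parses from the dir.
    if not conf_dir.is_dir():
        return False
    for f in sorted(conf_dir.iterdir()):
        if f.suffix not in (".conf", ".conflist", ".json"):
            continue
        try:
            cfg = json.loads(f.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        # A .conflist carries a "plugins" array; a single .conf has "type".
        if isinstance(cfg, dict) and (cfg.get("plugins") or cfg.get("type")):
            return True
    return False

if __name__ == "__main__":
    print("NetworkReady=%s" % network_ready())
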
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.215103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.215135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.215145 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.215160 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.215171 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.316391 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:49:19.441767423 +0000 UTC
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.317255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.317386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.317476 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.317544 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.317624 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.343361 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.343447 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:17:25 crc kubenswrapper[4823]: E0121 17:17:25.343554 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.343383 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:17:25 crc kubenswrapper[4823]: E0121 17:17:25.343786 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:17:25 crc kubenswrapper[4823]: E0121 17:17:25.344054 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.344225 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:17:25 crc kubenswrapper[4823]: E0121 17:17:25.344405 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.420824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.420928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.420946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.420971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.420988 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.524435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.524519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.524539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.524564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.524580 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.627046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.627125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.627150 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.627177 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.627194 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.730669 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.730751 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.730792 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.730827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.730848 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.833552 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.833595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.833605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.833636 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.833648 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.936499 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.936714 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.936753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.936779 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:25 crc kubenswrapper[4823]: I0121 17:17:25.936802 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:25Z","lastTransitionTime":"2026-01-21T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.040517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.040584 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.040598 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.040622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.040637 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.142998 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.143060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.143073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.143100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.143119 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.246957 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.247024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.247043 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.247068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.247086 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.316931 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:11:57.748330551 +0000 UTC
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.344949 4823 scope.go:117] "RemoveContainer" containerID="bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.348739 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.348782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.348794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.348812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.348823 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.451370 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.451748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.451770 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.451795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.451814 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
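
Each kubenswrapper entry here follows klog's conventional layout: a severity letter fused to the date (I0121, or E0121 for errors), a wall-clock time, the PID, the source file and line, a quoted message, and trailing key="value" pairs. For log analysis this can be pulled apart mechanically; a small Python sketch (the regex and field names are mine, not an official format specification):

# Minimal parser for klog-style entries such as the ones in this journal.
import re

KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<rest>.*)'
)

def parse(line):
    m = KLOG.search(line)
    if not m:
        return None
    d = m.groupdict()
    # Collect trailing key="value" pairs, e.g. node="crc" event="NodeNotReady".
    d["kv"] = dict(re.findall(r'(\w+)="([^"]*)"', d["rest"]))
    return d

sample = ('Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.344949 '
          '4823 scope.go:117] "RemoveContainer" '
          'containerID="bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7"')
print(parse(sample))
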
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.555345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.555399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.555418 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.555435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.555447 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.627988 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/1.log"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.631043 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6"}
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.631579 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df"
Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.652103 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.658322 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.658350 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.658359 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.658376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.658387 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.668903 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.683567 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.698351 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.715142 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.732273 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.745169 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.760020 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.760984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.761031 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.761043 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.761063 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.761267 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.769671 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.781746 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.792912 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.804046 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.814654 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.825039 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.832988 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.858146 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"11.021943 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 17:17:11.021976 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:11.022001 6246 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 17:17:11.022033 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 17:17:11.022043 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 17:17:11.022074 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 17:17:11.022248 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:11.022267 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:11.022269 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:11.022346 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:11.022387 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:11.022419 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:11.022454 6246 factory.go:656] Stopping watch factory\\\\nI0121 17:17:11.022484 6246 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:26Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.863936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.863983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.863994 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.864011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.864022 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.966443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.966480 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.966487 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.966501 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:26 crc kubenswrapper[4823]: I0121 17:17:26.966509 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:26Z","lastTransitionTime":"2026-01-21T17:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.076784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.077202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.077407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.077585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.077755 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:27Z","lastTransitionTime":"2026-01-21T17:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.180897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.180962 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.180973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.180997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.181011 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:27Z","lastTransitionTime":"2026-01-21T17:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.189532 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.198407 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.203086 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.214313 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.225157 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.236969 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.247975 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.262291 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.274391 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.284232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.284269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.284281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.284301 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.284315 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:27Z","lastTransitionTime":"2026-01-21T17:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.285877 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.298323 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.314786 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.317396 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 17:35:03.285069747 +0000 UTC Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.330714 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.343325 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.343410 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:27 crc kubenswrapper[4823]: E0121 17:17:27.343477 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.343325 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:27 crc kubenswrapper[4823]: E0121 17:17:27.343592 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.343347 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:27 crc kubenswrapper[4823]: E0121 17:17:27.343694 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:27 crc kubenswrapper[4823]: E0121 17:17:27.343777 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.352415 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"11.021943 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 17:17:11.021976 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:11.022001 6246 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 17:17:11.022033 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 17:17:11.022043 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 17:17:11.022074 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 17:17:11.022248 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:11.022267 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:11.022269 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:11.022346 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:11.022387 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:11.022419 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:11.022454 6246 factory.go:656] Stopping watch factory\\\\nI0121 17:17:11.022484 6246 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.384197 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.386431 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.386470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.386485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.386508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.386523 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:27Z","lastTransitionTime":"2026-01-21T17:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.404713 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.427536 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.442502 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.489597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.489638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.489649 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.489667 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.489680 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:27Z","lastTransitionTime":"2026-01-21T17:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.592159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.592215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.592231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.592258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.592269 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:27Z","lastTransitionTime":"2026-01-21T17:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.636917 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/2.log" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.637718 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/1.log" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.641483 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6" exitCode=1 Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.641557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.641620 4823 scope.go:117] "RemoveContainer" containerID="bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.642604 4823 scope.go:117] "RemoveContainer" containerID="acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6" Jan 21 17:17:27 crc kubenswrapper[4823]: E0121 17:17:27.642828 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.666506 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.675322 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:27 crc kubenswrapper[4823]: E0121 17:17:27.675444 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:27 crc kubenswrapper[4823]: E0121 17:17:27.675489 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs podName:9bcd33a4-ea1e-4977-8456-e34f2ed4c680 nodeName:}" failed. No retries permitted until 2026-01-21 17:17:43.675476703 +0000 UTC m=+64.601607563 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs") pod "network-metrics-daemon-htjnl" (UID: "9bcd33a4-ea1e-4977-8456-e34f2ed4c680") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.678004 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.688945 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.694999 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.695045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.695056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.695073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.695087 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:27Z","lastTransitionTime":"2026-01-21T17:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.708143 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.719843 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.731132 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.741970 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.753974 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.767810 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.780348 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.798185 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.798270 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.798289 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.798317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.798339 4823 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:27Z","lastTransitionTime":"2026-01-21T17:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.799740 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.822587 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcdfa9115670342af4a043e5ffecfc9611196c7d1a8550b7ced159f0a95ed2c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"11.021943 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 17:17:11.021976 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 17:17:11.022001 6246 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 17:17:11.022033 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 17:17:11.022043 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 17:17:11.022074 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 17:17:11.022248 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:11.022267 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 17:17:11.022269 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 17:17:11.022266 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 17:17:11.022346 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 17:17:11.022387 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 17:17:11.022419 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 17:17:11.022454 6246 factory.go:656] Stopping watch factory\\\\nI0121 17:17:11.022484 6246 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 
10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.834871 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.849741 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.862004 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.874062 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.884478 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:27Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.901422 4823 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.901471 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.901481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.901493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:27 crc kubenswrapper[4823]: I0121 17:17:27.901502 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:27Z","lastTransitionTime":"2026-01-21T17:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.004539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.004612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.004645 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.004674 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.004694 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.108081 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.108194 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.108226 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.108255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.108275 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.211245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.211315 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.211327 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.211345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.211362 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.313879 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.313928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.313941 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.313958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.313971 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.318174 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:56:52.995777154 +0000 UTC
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.415998 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.416031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.416042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.416055 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.416064 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.517894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.517943 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.517954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.517971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.517984 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.621343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.621408 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.621426 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.621450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.621468 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.648744 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/2.log"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.654818 4823 scope.go:117] "RemoveContainer" containerID="acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6"
Jan 21 17:17:28 crc kubenswrapper[4823]: E0121 17:17:28.655121 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8"
Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.675513 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.689343 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.705537 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.719248 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.724036 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.724072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.724085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.724103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.724118 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.737238 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.750549 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.779023 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.796260 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.810698 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.825240 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.826965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.826994 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.827002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.827018 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.827028 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.840191 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.852464 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.866961 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.881003 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.893454 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.909641 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508
064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.922567 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:28Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.930528 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.930631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.930653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.930685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 21 17:17:28 crc kubenswrapper[4823]: I0121 17:17:28.930707 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:28Z","lastTransitionTime":"2026-01-21T17:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.033955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.034002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.034014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.034042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.034054 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.137391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.137479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.137493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.137511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.137522 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.240047 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.240118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.240140 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.240165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.240184 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.296283 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.296407 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.296431 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.296464 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.296484 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.296614 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.296632 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.296643 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.296684 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 17:18:01.296672468 +0000 UTC m=+82.222803328 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.297092 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.297093 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:01.297077407 +0000 UTC m=+82.223208267 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.297150 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:18:01.297136139 +0000 UTC m=+82.223266999 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.297167 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.297226 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.297247 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.297167 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.297323 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 17:18:01.297296753 +0000 UTC m=+82.223427643 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.297380 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:18:01.297369175 +0000 UTC m=+82.223500165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.318930 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:49:42.825836407 +0000 UTC Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.342654 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.342804 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.343621 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.344668 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.344917 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.345128 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.345270 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:29 crc kubenswrapper[4823]: E0121 17:17:29.345624 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.346128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.346237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.346323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.346403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.346478 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.360356 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.377580 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.393723 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.411535 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.426412 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.442782 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.449789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.449819 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.449831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.449848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.449888 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.456674 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.472445 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.485692 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.501966 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.518575 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.535423 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.549982 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.552589 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.552643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.552662 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.552685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.552700 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.563485 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.591722 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.606437 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.621165 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:29Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.654634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.654671 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.654679 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.654692 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.654703 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.757545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.757733 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.757765 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.757796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.757818 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.861301 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.861414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.861434 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.861458 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.861477 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.964339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.964501 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.964522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.964545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:29 crc kubenswrapper[4823]: I0121 17:17:29.964561 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:29Z","lastTransitionTime":"2026-01-21T17:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.068123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.068175 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.068187 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.068205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.068217 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.171165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.171223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.171240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.171268 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.171286 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.274428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.274497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.274517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.274542 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.274560 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.319829 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:46:20.389796165 +0000 UTC Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.377739 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.377807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.377824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.377845 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.377885 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.481413 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.481449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.481460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.481475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.481486 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.583513 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.583556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.583600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.583616 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.583627 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.686076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.686152 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.686173 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.686201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.686223 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.788567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.788611 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.788624 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.788642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.788690 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.890623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.890663 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.890672 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.890686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.890697 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.992627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.992665 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.992675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.992691 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:30 crc kubenswrapper[4823]: I0121 17:17:30.992701 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:30Z","lastTransitionTime":"2026-01-21T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.095006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.095070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.095081 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.095095 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.095104 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.198030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.198109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.198130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.198190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.198211 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.300995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.301031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.301044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.301060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.301071 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.320090 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:33:43.124345468 +0000 UTC Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.342905 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.342972 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.343095 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.343155 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.343237 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.343272 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.343336 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.343479 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.404209 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.404294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.404306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.404347 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.404360 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.457441 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.457497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.457506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.457522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.457530 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.471534 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:31Z is after 
2025-08-24T17:21:41Z" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.476052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.476110 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.476130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.476157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.476179 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.493275 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:31Z is after 
2025-08-24T17:21:41Z" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.496742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.496793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.496811 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.496834 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.496878 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.509391 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [… images, nodeInfo, and runtimeHandlers omitted: byte-identical to the previous patch attempt …]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:31Z is after
2025-08-24T17:21:41Z" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.513635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.513676 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.513685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.513701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.513712 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.531256 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [… images, nodeInfo, and runtimeHandlers omitted: byte-identical to the previous patch attempt …]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:31Z is after
2025-08-24T17:21:41Z" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.540009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.540084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.540094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.540129 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.540141 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.555835 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [… images, nodeInfo, and runtimeHandlers omitted: byte-identical to the previous patch attempt …]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:31Z is after
2025-08-24T17:21:41Z" Jan 21 17:17:31 crc kubenswrapper[4823]: E0121 17:17:31.556001 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.557593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.557677 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.557688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.557707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.557718 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.659772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.659818 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.659828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.659844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.659870 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.765365 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.765456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.765477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.765506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.765532 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.867893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.867934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.867946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.867962 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.867973 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.969989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.970022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.970033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.970049 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:31 crc kubenswrapper[4823]: I0121 17:17:31.970061 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:31Z","lastTransitionTime":"2026-01-21T17:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.072930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.073664 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.073757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.073895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.073994 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:32Z","lastTransitionTime":"2026-01-21T17:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.176349 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.176704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.177031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.177337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.177629 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:32Z","lastTransitionTime":"2026-01-21T17:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.280783 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.281251 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.281401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.281529 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.281669 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:32Z","lastTransitionTime":"2026-01-21T17:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.320206 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:48:02.263741513 +0000 UTC Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.384638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.384677 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.384688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.384705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.384715 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:32Z","lastTransitionTime":"2026-01-21T17:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.487596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.487636 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.487653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.487668 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:32 crc kubenswrapper[4823]: I0121 17:17:32.487677 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:32Z","lastTransitionTime":"2026-01-21T17:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.212778 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.212832 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.212885 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.212915 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.212933 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:33Z","lastTransitionTime":"2026-01-21T17:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.316204 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.316273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.316294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.316322 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.316346 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:33Z","lastTransitionTime":"2026-01-21T17:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.321576 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:16:15.008474659 +0000 UTC
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.343198 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:17:33 crc kubenswrapper[4823]: E0121 17:17:33.343413 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.343749 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:33 crc kubenswrapper[4823]: E0121 17:17:33.343926 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.344212 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:33 crc kubenswrapper[4823]: E0121 17:17:33.344490 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.344573 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:33 crc kubenswrapper[4823]: E0121 17:17:33.344772 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.419123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.419220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.419240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.419295 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:33 crc kubenswrapper[4823]: I0121 17:17:33.419316 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:33Z","lastTransitionTime":"2026-01-21T17:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.322217 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:54:02.399059118 +0000 UTC
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.349266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.349356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.349399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.349431 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.349455 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:34Z","lastTransitionTime":"2026-01-21T17:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.452252 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.452311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.452329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.452355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:34 crc kubenswrapper[4823]: I0121 17:17:34.452371 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:34Z","lastTransitionTime":"2026-01-21T17:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.176915 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.176975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.176983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.177003 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.177014 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:35Z","lastTransitionTime":"2026-01-21T17:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.279947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.280017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.280036 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.280067 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.280085 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:35Z","lastTransitionTime":"2026-01-21T17:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.322943 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:23:16.392433889 +0000 UTC
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.343392 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.343436 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.343400 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:35 crc kubenswrapper[4823]: E0121 17:17:35.343626 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.343720 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:35 crc kubenswrapper[4823]: E0121 17:17:35.343844 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:35 crc kubenswrapper[4823]: E0121 17:17:35.343955 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:35 crc kubenswrapper[4823]: E0121 17:17:35.344065 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.382846 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.382931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.382951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.382972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:35 crc kubenswrapper[4823]: I0121 17:17:35.382983 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:35Z","lastTransitionTime":"2026-01-21T17:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.323781 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:18:10.275506603 +0000 UTC
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.413070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.413114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.413125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.413141 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.413154 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:36Z","lastTransitionTime":"2026-01-21T17:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.515655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.515730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.515748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.515773 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:36 crc kubenswrapper[4823]: I0121 17:17:36.515792 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:36Z","lastTransitionTime":"2026-01-21T17:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.237156 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.237220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.237235 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.237255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.237270 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:37Z","lastTransitionTime":"2026-01-21T17:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.324741 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:07:37.160312158 +0000 UTC
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.340414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.340478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.340491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.340514 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.340527 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:37Z","lastTransitionTime":"2026-01-21T17:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.342700 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.342795 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.342799 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.342732 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:37 crc kubenswrapper[4823]: E0121 17:17:37.342918 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:37 crc kubenswrapper[4823]: E0121 17:17:37.343043 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:37 crc kubenswrapper[4823]: E0121 17:17:37.343176 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:37 crc kubenswrapper[4823]: E0121 17:17:37.343292 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.444424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.444497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.444519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.444544 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.444560 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:37Z","lastTransitionTime":"2026-01-21T17:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.547384 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.547423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.547432 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.547448 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.547459 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:37Z","lastTransitionTime":"2026-01-21T17:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.652109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.652200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.652233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.652263 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.652284 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:37Z","lastTransitionTime":"2026-01-21T17:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.756248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.756330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.756366 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.756396 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.756416 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:37Z","lastTransitionTime":"2026-01-21T17:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.859614 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.859675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.859699 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.859730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.859752 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:37Z","lastTransitionTime":"2026-01-21T17:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.962967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.963016 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.963027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.963044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:37 crc kubenswrapper[4823]: I0121 17:17:37.963056 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:37Z","lastTransitionTime":"2026-01-21T17:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.066703 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.066772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.066797 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.066826 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.066845 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.169935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.170002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.170023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.170050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.170069 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.272180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.272222 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.272233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.272248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.272260 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.325374 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:27:15.991932237 +0000 UTC Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.375592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.375649 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.375666 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.375692 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.375708 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.478966 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.479026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.479042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.479066 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.479084 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.581703 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.581772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.581793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.581815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.581831 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.684849 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.685123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.685148 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.685175 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.685195 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.787580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.787633 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.787642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.787657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.787685 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.890521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.890791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.890816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.890884 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.890914 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.994872 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.994910 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.994919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.994933 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:38 crc kubenswrapper[4823]: I0121 17:17:38.994943 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:38Z","lastTransitionTime":"2026-01-21T17:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.097780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.097896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.097928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.097959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.097982 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:39Z","lastTransitionTime":"2026-01-21T17:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.201358 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.201426 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.201447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.201476 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.201499 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:39Z","lastTransitionTime":"2026-01-21T17:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.304322 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.304459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.304488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.304522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.304546 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:39Z","lastTransitionTime":"2026-01-21T17:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.326075 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:07:31.064249443 +0000 UTC Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.342627 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.342720 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:39 crc kubenswrapper[4823]: E0121 17:17:39.342999 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.343095 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:39 crc kubenswrapper[4823]: E0121 17:17:39.343179 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:39 crc kubenswrapper[4823]: E0121 17:17:39.343258 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.343931 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:39 crc kubenswrapper[4823]: E0121 17:17:39.344011 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.359756 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.373501 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.385208 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.397908 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.407644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.407689 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.407705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.407725 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.407738 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:39Z","lastTransitionTime":"2026-01-21T17:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
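[Separately from the CNI problem, every status patch above is being rejected because the pod.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-21, so the kubelet cannot persist pod status at all. The NotBefore/NotAfter comparison behind "x509: certificate has expired or is not yet valid" can be reproduced directly with crypto/x509; the PEM path below is a placeholder argument, not a path from the log.

    // certwindow.go - check a PEM certificate's validity window, the
    // same test behind "x509: certificate has expired".
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile(os.Args[1]) // path to the webhook's cert (placeholder)
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	now := time.Now()
    	fmt.Println("NotBefore:", cert.NotBefore)
    	fmt.Println("NotAfter: ", cert.NotAfter)
    	switch {
    	case now.Before(cert.NotBefore):
    		fmt.Println("certificate is not yet valid")
    	case now.After(cert.NotAfter):
    		fmt.Println("certificate has expired") // the case hit in this log
    	default:
    		fmt.Println("certificate is within its validity window")
    	}
    }

]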
Has your network provider started?"} Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.410442 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.421601 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.455711 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.470878 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.481611 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.494194 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.505160 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.512423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.512456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.512468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.512484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.512496 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:39Z","lastTransitionTime":"2026-01-21T17:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.515742 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb241
86669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.530572 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.541193 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.552937 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.565206 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.576974 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:39Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.615429 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.615497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.615520 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.615548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.615574 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:39Z","lastTransitionTime":"2026-01-21T17:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.718345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.718385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.718395 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.718411 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.718421 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:39Z","lastTransitionTime":"2026-01-21T17:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.821197 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.821246 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.821260 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.821279 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.821294 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:39Z","lastTransitionTime":"2026-01-21T17:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.924800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.924848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.924876 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.924893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:39 crc kubenswrapper[4823]: I0121 17:17:39.924905 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:39Z","lastTransitionTime":"2026-01-21T17:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.027490 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.027551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.027566 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.027593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.027610 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.130645 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.130711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.130735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.130767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.130787 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.233161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.233202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.233213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.233228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.233241 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.326936 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:52:58.409825518 +0000 UTC Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.335724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.335761 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.335771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.335786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.335796 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.438750 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.438794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.438806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.438824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.438838 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.540651 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.540686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.540694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.540709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.540720 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.643838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.643946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.643971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.644005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.644028 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.747037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.747100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.747118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.747142 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.747160 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.850209 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.850277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.850294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.850312 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.850333 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.953042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.953084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.953094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.953109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:40 crc kubenswrapper[4823]: I0121 17:17:40.953120 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:40Z","lastTransitionTime":"2026-01-21T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.055659 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.055697 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.055705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.055718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.055729 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.157972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.158028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.158037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.158050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.158100 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.260937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.261033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.261046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.261066 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.261078 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.327199 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:47:46.383038686 +0000 UTC Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.342689 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.342703 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.342966 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.342908 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.342703 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.343049 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.343091 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.343152 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.363027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.363072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.363083 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.363100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.363114 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.466468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.466538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.466562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.466593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.466619 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.569664 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.569722 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.569738 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.569762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.569778 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.664848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.664935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.664946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.664959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.664968 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.682071 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:41Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.686031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.686068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.686076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.686088 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.686096 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.700601 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:41Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.705946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.706015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.706032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.706055 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.706071 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.723170 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:41Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.727700 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.727769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.727831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.727905 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.727930 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.752720 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:41Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.758544 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.758580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.758595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.758615 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.758630 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.770943 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:41Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:41 crc kubenswrapper[4823]: E0121 17:17:41.771211 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.772907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
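
The repeated "failed to patch status" errors above, ending in "update node status exceeds retry count", all trace to one root cause: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21. A minimal Go sketch (a hypothetical standalone helper, not part of the kubelet or the webhook) that dials the endpoint and prints the served certificate's validity window, to confirm the x509 error independently of the kubelet:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the failed webhook Post in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification so the handshake succeeds even though the
		// served certificate is expired; we only want to inspect it.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", now.After(cert.NotAfter))
}

InsecureSkipVerify is deliberate here: the point is to read the expired certificate's NotAfter, not to trust it.
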
event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.772976 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.772992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.773009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.773053 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.875848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.875925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.875941 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.875963 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.875975 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.979154 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.979237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.979289 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.979314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:41 crc kubenswrapper[4823]: I0121 17:17:41.979331 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:41Z","lastTransitionTime":"2026-01-21T17:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.081789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.081890 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.081909 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.081935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.081953 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:42Z","lastTransitionTime":"2026-01-21T17:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.184456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.184516 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.184528 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.184544 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.184556 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:42Z","lastTransitionTime":"2026-01-21T17:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.287551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.287613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.287632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.287661 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.287677 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:42Z","lastTransitionTime":"2026-01-21T17:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.327521 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:04:27.808879843 +0000 UTC Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.343768 4823 scope.go:117] "RemoveContainer" containerID="acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6" Jan 21 17:17:42 crc kubenswrapper[4823]: E0121 17:17:42.344077 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.390134 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.390207 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.390224 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.390247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.390264 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:42Z","lastTransitionTime":"2026-01-21T17:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.493004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.493046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.493054 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.493067 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.493074 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:42Z","lastTransitionTime":"2026-01-21T17:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.598943 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.598993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.599009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.599038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.599060 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:42Z","lastTransitionTime":"2026-01-21T17:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.701473 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.701509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.701517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.701530 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.701538 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:42Z","lastTransitionTime":"2026-01-21T17:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.803680 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.803717 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.803726 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.803742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.803753 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:42Z","lastTransitionTime":"2026-01-21T17:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.906091 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.906217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.906229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.906245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:42 crc kubenswrapper[4823]: I0121 17:17:42.906257 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:42Z","lastTransitionTime":"2026-01-21T17:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.008597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.008642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.008653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.008667 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.008675 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.111334 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.111376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.111387 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.111403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.111414 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.214266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.214314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.214332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.214346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.214355 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.316760 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.316805 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.316839 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.316864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.316874 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.328175 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:54:57.086016697 +0000 UTC Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.343547 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.343597 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.343683 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:43 crc kubenswrapper[4823]: E0121 17:17:43.343690 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.343737 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:43 crc kubenswrapper[4823]: E0121 17:17:43.343946 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:43 crc kubenswrapper[4823]: E0121 17:17:43.343992 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:43 crc kubenswrapper[4823]: E0121 17:17:43.344061 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.419729 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.419788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.419797 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.419814 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.419825 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
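
Every NodeNotReady heartbeat and every "Error syncing pod, skipping" entry above carries the same message: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration file — expected while the ovnkube-node pod that would write it is itself in CrashLoopBackOff (see the ovnkube-node-7q2df entry earlier). A sketch of that readiness gate, under the assumption that the check amounts to finding at least one .conf/.conflist/.json file in the configured directory (libcni's default extensions; the exact kubelet/CRI-O plumbing is more involved):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path from the log above
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, "glob:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// Mirrors the condition behind NetworkReady=false in the log.
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("CNI configs present:", found)
}

Once the network provider starts and drops its config file into net.d, this condition clears and the Ready condition flips without any kubelet restart.
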
Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.522153 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.522215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.522232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.522247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.522259 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.624362 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.624392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.624400 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.624416 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.624427 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.726597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.726651 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.726663 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.726680 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.726693 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.756082 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:43 crc kubenswrapper[4823]: E0121 17:17:43.756274 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:43 crc kubenswrapper[4823]: E0121 17:17:43.756374 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs podName:9bcd33a4-ea1e-4977-8456-e34f2ed4c680 nodeName:}" failed. No retries permitted until 2026-01-21 17:18:15.756349741 +0000 UTC m=+96.682480671 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs") pod "network-metrics-daemon-htjnl" (UID: "9bcd33a4-ea1e-4977-8456-e34f2ed4c680") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.829307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.829354 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.829367 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.829388 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.829400 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
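
The MountVolume failure for metrics-certs above is retried on an exponential backoff, and the logged "durationBeforeRetry 32s" is consistent with a doubling schedule from a sub-second initial delay (32s = 500ms x 2^6, i.e. the seventh consecutive failure). A small sketch with assumed parameters — 500ms initial delay, factor 2, a cap of roughly two minutes; these constants are illustrative, not the kubelet's exact values:

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 500 * time.Millisecond       // assumed initial delay
		factor  = 2                            // assumed growth factor
		maxWait = 2*time.Minute + 2*time.Second // assumed cap
	)
	wait := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d fails -> retry in %v\n", attempt, wait)
		wait *= factor
		if wait > maxWait {
			wait = maxWait
		}
	}
}

The backoff also explains the "No retries permitted until 17:18:15" timestamp: 32s after the 17:17:43 failure, independent of how often the reconciler loop runs in between.
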
Has your network provider started?"} Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.931797 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.931843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.931870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.931887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:43 crc kubenswrapper[4823]: I0121 17:17:43.931898 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:43Z","lastTransitionTime":"2026-01-21T17:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.033961 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.034009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.034021 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.034037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.034048 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.136253 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.136296 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.136306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.136323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.136334 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.239430 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.239501 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.239515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.239531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.239541 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.328798 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:17:34.528737911 +0000 UTC Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.341617 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.341663 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.341676 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.341694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.341705 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.443827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.443882 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.443891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.443904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.443912 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.546447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.546495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.546507 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.546524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.546537 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.649421 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.649459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.649468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.649483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.649493 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.752106 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.752146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.752155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.752172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.752182 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.855168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.855231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.855247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.855270 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.855287 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.957087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.957135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.957145 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.957160 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:44 crc kubenswrapper[4823]: I0121 17:17:44.957170 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:44Z","lastTransitionTime":"2026-01-21T17:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.059769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.059802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.059812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.059827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.059837 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.162016 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.162066 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.162075 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.162093 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.162105 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.263808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.263843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.263878 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.263893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.263904 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.329681 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:20:34.930901274 +0000 UTC Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.343987 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.344029 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.344138 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.344203 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:45 crc kubenswrapper[4823]: E0121 17:17:45.344280 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:45 crc kubenswrapper[4823]: E0121 17:17:45.344426 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:45 crc kubenswrapper[4823]: E0121 17:17:45.344554 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:45 crc kubenswrapper[4823]: E0121 17:17:45.344933 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.356610 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.365749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.365781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.365788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.365801 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.365810 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.468237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.468298 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.468318 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.468336 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.468349 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.570960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.570999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.571008 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.571022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.571031 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.673418 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.673480 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.673501 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.673533 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.673555 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.712893 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/0.log" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.712947 4823 generic.go:334] "Generic (PLEG): container finished" podID="ea8699bd-e53a-443e-b2e5-0fe577f2c19f" containerID="788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5" exitCode=1 Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.712987 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-skvzm" event={"ID":"ea8699bd-e53a-443e-b2e5-0fe577f2c19f","Type":"ContainerDied","Data":"788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.714716 4823 scope.go:117] "RemoveContainer" containerID="788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.724046 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d32b4b0e-e14f-475f-952a-f032f5161aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b5ce32fb9cb2dd4ccaf6b27c00ec29029ec90005b80de18cd25fa41ac3d45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3
Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.738063 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and 
discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.749086 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.760899 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.771400 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.775067 4823 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.775095 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.775104 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.775118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.775126 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.781287 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.790938 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.802821 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.812727 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.822513 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.838494 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.852138 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:45Z\\\",\\\"message\\\":\\\"2026-01-21T17:16:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417\\\\n2026-01-21T17:16:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417 to /host/opt/cni/bin/\\\\n2026-01-21T17:16:59Z [verbose] multus-daemon started\\\\n2026-01-21T17:16:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T17:17:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.864279 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 
17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.875158 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.876661 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.876710 4823 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.876723 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.876739 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.876751 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.889555 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.899790 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.907936 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.944532 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:45Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.978881 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.979122 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.979191 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.979266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:45 crc kubenswrapper[4823]: I0121 17:17:45.979341 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:45Z","lastTransitionTime":"2026-01-21T17:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.081217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.081257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.081266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.081280 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.081289 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:46Z","lastTransitionTime":"2026-01-21T17:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.183837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.183900 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.183911 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.183928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.183939 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:46Z","lastTransitionTime":"2026-01-21T17:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.286472 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.286800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.286946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.287042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.287139 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:46Z","lastTransitionTime":"2026-01-21T17:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.330275 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:14:00.862439751 +0000 UTC Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.389653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.389694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.389707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.389724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.389737 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:46Z","lastTransitionTime":"2026-01-21T17:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.492450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.492514 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.492529 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.492575 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.492588 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:46Z","lastTransitionTime":"2026-01-21T17:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.594715 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.594764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.594774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.594790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.594802 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:46Z","lastTransitionTime":"2026-01-21T17:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.697806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.698236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.698403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.698539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.698678 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:46Z","lastTransitionTime":"2026-01-21T17:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.717635 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/0.log" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.717737 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-skvzm" event={"ID":"ea8699bd-e53a-443e-b2e5-0fe577f2c19f","Type":"ContainerStarted","Data":"49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.730467 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.743508 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.759366 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.777055 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.787498 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d32b4b0e-e14f-475f-952a-f032f5161aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b5ce32fb9cb2dd4ccaf6b27c00ec29029ec90005b80de18cd25fa41ac3d45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.800663 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.802091 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.802126 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.802135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.802150 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.802158 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:46Z","lastTransitionTime":"2026-01-21T17:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.813232 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.825195 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.840122 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.850289 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.861290 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.872439 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.881016 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.897718 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.904393 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.904611 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.904902 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.905022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.905118 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:46Z","lastTransitionTime":"2026-01-21T17:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.908047 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.921474 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:45Z\\\",\\\"message\\\":\\\"2026-01-21T17:16:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417\\\\n2026-01-21T17:16:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417 to /host/opt/cni/bin/\\\\n2026-01-21T17:16:59Z [verbose] multus-daemon started\\\\n2026-01-21T17:16:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T17:17:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.936223 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 
17:17:46 crc kubenswrapper[4823]: I0121 17:17:46.947960 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:46Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.008404 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.008469 4823 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.008488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.008511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.008528 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.111435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.111475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.111487 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.111505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.111519 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.214502 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.215157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.215316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.215475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.215611 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.318891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.319161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.319230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.319297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.319369 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.331239 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:46:36.278272414 +0000 UTC Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.343682 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.343764 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.343775 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.343803 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:47 crc kubenswrapper[4823]: E0121 17:17:47.344125 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:47 crc kubenswrapper[4823]: E0121 17:17:47.344304 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:47 crc kubenswrapper[4823]: E0121 17:17:47.344456 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:47 crc kubenswrapper[4823]: E0121 17:17:47.344514 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.422031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.422070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.422078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.422094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.422104 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.525535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.525926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.526013 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.526087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.526163 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.628057 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.628467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.628543 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.628630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.628697 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.731762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.732279 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.732381 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.732492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.732576 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.835674 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.835727 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.835739 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.835759 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.835770 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.938107 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.938468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.938554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.938639 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:47 crc kubenswrapper[4823]: I0121 17:17:47.938720 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:47Z","lastTransitionTime":"2026-01-21T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.041359 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.041430 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.041446 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.041489 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.041512 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.144546 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.144610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.144623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.144651 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.144665 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.247997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.248061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.248074 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.248096 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.248112 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.332332 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:18:21.894702668 +0000 UTC Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.350712 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.350776 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.350789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.350810 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.350825 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.453469 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.453512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.453530 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.453558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.453576 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.555537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.555598 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.555608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.555623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.555632 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.658771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.658882 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.658899 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.658919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.658930 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.762169 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.762260 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.762289 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.762320 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.762343 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.865545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.865595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.865605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.865629 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.865643 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.967642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.967686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.967698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.967714 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:48 crc kubenswrapper[4823]: I0121 17:17:48.967726 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:48Z","lastTransitionTime":"2026-01-21T17:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.070803 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.070889 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.070903 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.070927 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.070959 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.172723 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.172759 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.172769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.172801 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.172812 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.275407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.275438 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.275448 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.275464 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.275474 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.333290 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:23:08.878040374 +0000 UTC
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.342559 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.342612 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.342620 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.342720 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:17:49 crc kubenswrapper[4823]: E0121 17:17:49.342717 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:17:49 crc kubenswrapper[4823]: E0121 17:17:49.342816 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:17:49 crc kubenswrapper[4823]: E0121 17:17:49.342966 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:17:49 crc kubenswrapper[4823]: E0121 17:17:49.343051 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.353648 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d32b4b0e-e14f-475f-952a-f032f5161aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b5ce32fb9cb2dd4ccaf6b27c00ec29029ec90005b80de18cd25fa41ac3d45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 
2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.364796 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.375670 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.377342 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.377379 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.377391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.377407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.377418 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.387595 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.398555 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.408182 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.420359 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.430464 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.440626 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.456077 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.465111 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.478014 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:45Z\\\",\\\"message\\\":\\\"2026-01-21T17:16:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417\\\\n2026-01-21T17:16:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417 to /host/opt/cni/bin/\\\\n2026-01-21T17:16:59Z [verbose] multus-daemon started\\\\n2026-01-21T17:16:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T17:17:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.479650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.479687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.479698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.479714 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.479727 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.491370 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.503338 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.521931 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.539380 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.552417 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.580402 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175
cd2b2ff47fe9d3da1c133bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:49Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.581926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.582003 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.582018 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.582046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.582059 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.684568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.684650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.684665 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.684691 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.684705 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.787844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.787917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.787933 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.787953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.787969 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.890756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.890816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.890827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.890864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.890878 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.994173 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.994228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.994239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.994257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:49 crc kubenswrapper[4823]: I0121 17:17:49.994269 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:49Z","lastTransitionTime":"2026-01-21T17:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.097537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.097594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.097610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.097634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.097647 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:50Z","lastTransitionTime":"2026-01-21T17:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.200263 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.200310 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.200321 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.200340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.200352 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:50Z","lastTransitionTime":"2026-01-21T17:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.302425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.302484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.302510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.302532 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.302547 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:50Z","lastTransitionTime":"2026-01-21T17:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.334353 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:46:11.002867587 +0000 UTC Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.404983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.405027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.405038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.405054 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.405066 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:50Z","lastTransitionTime":"2026-01-21T17:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.508322 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.508367 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.508375 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.508410 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.508419 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:50Z","lastTransitionTime":"2026-01-21T17:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.611698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.611742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.611757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.611774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.611785 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:50Z","lastTransitionTime":"2026-01-21T17:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.713992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.714033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.714046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.714061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.714073 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:50Z","lastTransitionTime":"2026-01-21T17:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.816491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.816532 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.816541 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.816555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.816565 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:50Z","lastTransitionTime":"2026-01-21T17:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.918949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.918992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.919001 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.919015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:50 crc kubenswrapper[4823]: I0121 17:17:50.919023 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:50Z","lastTransitionTime":"2026-01-21T17:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.021566 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.021614 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.021625 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.021644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.021656 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.123644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.123720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.123735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.123758 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.123774 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.226207 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.226252 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.226264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.226283 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.226295 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.328904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.328931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.328940 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.328953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.328963 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.334825 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:09:01.861345322 +0000 UTC Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.345069 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:51 crc kubenswrapper[4823]: E0121 17:17:51.345230 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.345488 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:51 crc kubenswrapper[4823]: E0121 17:17:51.345597 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.345798 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:51 crc kubenswrapper[4823]: E0121 17:17:51.345969 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.345989 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:51 crc kubenswrapper[4823]: E0121 17:17:51.346105 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.431017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.431076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.431092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.431116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.431135 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.533043 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.533090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.533101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.533118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.533131 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.635782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.635821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.635865 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.635881 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.635892 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.738384 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.738421 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.738431 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.738446 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.738456 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.840243 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.840281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.840291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.840306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.840316 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.942613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.942646 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.942654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.942669 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.942677 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.994090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.994174 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.994199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.994276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:51 crc kubenswrapper[4823]: I0121 17:17:51.994364 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:51Z","lastTransitionTime":"2026-01-21T17:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: E0121 17:17:52.007044 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:52Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.010678 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.010734 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.010747 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.010763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.011170 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: E0121 17:17:52.029569 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:52Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.032603 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.032646 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
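Annotation: the status-patch failure above is the second distinct fault in this log. The kubelet's PATCH of the Node object is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21. A sketch to confirm which certificate the endpoint is actually serving follows; assumptions: it is run on the node while the port is reachable, and the third-party cryptography package is installed -- none of this is taken from the log beyond the address and the error text.

# Sketch: fetch the webhook's serving certificate and compare notAfter with now
# (assumptions: run on the node, 127.0.0.1:9743 reachable, 'cryptography' installed).
import datetime
import ssl

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the webhook error above

pem = ssl.get_server_certificate((HOST, PORT))  # retrieval only, no chain validation
cert = x509.load_pem_x509_certificate(pem.encode())

try:
    not_after = cert.not_valid_after_utc  # cryptography >= 42
except AttributeError:
    not_after = cert.not_valid_after.replace(tzinfo=datetime.timezone.utc)

now = datetime.datetime.now(datetime.timezone.utc)
print("subject: ", cert.subject.rfc4514_string())
print("notAfter:", not_after)
if now > not_after:
    # Mirrors the kubelet error: "x509: certificate has expired or is not yet valid"
    print("certificate has expired -- node status patches will keep failing")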
event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.032659 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.032676 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.032687 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: E0121 17:17:52.044516 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:52Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.047704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.047744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
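Annotation: from here the journal repeats the same patch-failure payload and the same five-line NotReady cycle every few tens of milliseconds; the duplicate retries have been elided above, and only the final (truncated) retry is kept below. When triaging a dump like this, collapsing repeats to one line per distinct message with a count makes the two underlying faults obvious. A sketch follows; it assumes the dump is piped in on stdin, and summarize.py is a hypothetical name for the script, not a real tool.

# Sketch: collapse a noisy journal dump into distinct kubelet messages with counts
# (assumption: dump piped via stdin, e.g. journalctl -u kubelet | python3 summarize.py,
# where summarize.py is a hypothetical name for this script).
import re
import sys
from collections import Counter

# Strip the "Jan 21 17:17:52 crc kubenswrapper[4823]:" journal prefix and the
# klog header "I0121 17:17:52.xxxxxx 4823 file.go:NNN]".
PREFIX = re.compile(
    r'^\w{3} \d{2} [\d:]{8} \S+ \S+\[\d+\]: [IWE]\d{4} [\d:.]+\s+\d+ \S+\] '
)

counts: Counter[str] = Counter()
for line in sys.stdin:
    stripped = PREFIX.sub("", line.strip())
    if stripped:
        counts[stripped[:120]] += 1  # truncate so the huge status payloads group together

for message, n in counts.most_common(20):
    print(f"{n:6d}  {message}")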
event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.047757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.047774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.047786 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: E0121 17:17:52.060395 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:52Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.063408 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.063450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.063462 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.063478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.063491 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: E0121 17:17:52.076203 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"91025cce-7a80-4c4c-9dde-4315071ed327\\\",\\\"systemUUID\\\":\\\"b2b8fe66-0f89-498e-96c2-0d424acf77a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:52Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:52 crc kubenswrapper[4823]: E0121 17:17:52.076313 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.077549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.077579 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.077589 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.077605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.077618 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.180420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.180488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.180505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.180530 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.180547 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.283450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.283497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.283509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.283527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.283539 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.334908 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:35:00.003564142 +0000 UTC Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.386064 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.386122 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.386133 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.386146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.386156 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.489273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.489329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.489351 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.489381 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.489409 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.592079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.592116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.592128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.592146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.592160 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.694930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.694968 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.694979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.694995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.695009 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.798079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.798157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.798173 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.798191 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.798204 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.901089 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.901176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.901201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.901231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:52 crc kubenswrapper[4823]: I0121 17:17:52.901254 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:52Z","lastTransitionTime":"2026-01-21T17:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.003786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.003850 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.003889 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.003908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.003920 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.106273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.106312 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.106321 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.106338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.106347 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.208635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.208674 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.208686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.208701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.208709 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.311516 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.311567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.311586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.311610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.311626 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.335999 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:53:42.447787121 +0000 UTC Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.343381 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.343438 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.343475 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.343385 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:53 crc kubenswrapper[4823]: E0121 17:17:53.343641 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:53 crc kubenswrapper[4823]: E0121 17:17:53.343542 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:53 crc kubenswrapper[4823]: E0121 17:17:53.343736 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:53 crc kubenswrapper[4823]: E0121 17:17:53.343980 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.414617 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.414714 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.414735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.414761 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.414781 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.517786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.517912 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.517937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.517965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.517991 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.621127 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.621239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.621263 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.621290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.621310 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.723322 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.723398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.723417 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.723442 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.723459 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.826309 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.826359 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.826372 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.826390 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.826401 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.928588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.928641 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.928655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.928672 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:53 crc kubenswrapper[4823]: I0121 17:17:53.928685 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:53Z","lastTransitionTime":"2026-01-21T17:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.031934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.031992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.032014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.032039 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.032058 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:54Z","lastTransitionTime":"2026-01-21T17:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.134198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.134237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.134245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.134258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.134267 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:54Z","lastTransitionTime":"2026-01-21T17:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.238126 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.238183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.238198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.238217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.238236 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:54Z","lastTransitionTime":"2026-01-21T17:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.336758 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:39:30.755832235 +0000 UTC Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.341144 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.341190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.341225 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.341241 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.341253 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:54Z","lastTransitionTime":"2026-01-21T17:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:54 crc kubenswrapper[4823]: I0121 17:17:54.343694 4823 scope.go:117] "RemoveContainer" containerID="acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.206679 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.206716 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.206726 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.206739 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.206747 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:55Z","lastTransitionTime":"2026-01-21T17:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.310355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.310494 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.310508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.310525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.310538 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:55Z","lastTransitionTime":"2026-01-21T17:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.337350 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:19:08.034126192 +0000 UTC Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.342680 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.342747 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:55 crc kubenswrapper[4823]: E0121 17:17:55.342837 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.342680 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:55 crc kubenswrapper[4823]: E0121 17:17:55.343016 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:55 crc kubenswrapper[4823]: E0121 17:17:55.343155 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.343403 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:55 crc kubenswrapper[4823]: E0121 17:17:55.343570 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.413731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.413785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.413800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.413821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.413834 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:55Z","lastTransitionTime":"2026-01-21T17:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.516715 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.516762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.516777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.516798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.516813 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:55Z","lastTransitionTime":"2026-01-21T17:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.619255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.619315 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.619335 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.619363 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.619385 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:55Z","lastTransitionTime":"2026-01-21T17:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.722593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.722967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.723176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.723333 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.723475 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:55Z","lastTransitionTime":"2026-01-21T17:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.826355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.826440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.826465 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.826495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.826517 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:55Z","lastTransitionTime":"2026-01-21T17:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.929126 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.929188 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.929206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.929230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:55 crc kubenswrapper[4823]: I0121 17:17:55.929249 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:55Z","lastTransitionTime":"2026-01-21T17:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.031820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.031906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.031924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.031950 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.031968 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.135112 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.135157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.135173 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.135196 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.135215 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.212688 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/2.log" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.215431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.215883 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.232106 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:45Z\\\",\\\"message\\\":\\\"2026-01-21T17:16:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417\\\\n2026-01-21T17:16:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417 to /host/opt/cni/bin/\\\\n2026-01-21T17:16:59Z [verbose] multus-daemon started\\\\n2026-01-21T17:16:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T17:17:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.237357 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.237394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.237404 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.237421 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.237432 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.246157 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.256697 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.268317 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.278677 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.288734 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.305677 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896d3310211c1051a703cecf77a714d26fa5dac5
bab2d42156660132896e6151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.314666 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d32b4b0e-e14f-475f-952a-f032f5161aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b5ce32fb9cb2dd4ccaf6b27c00ec29029ec90005b80de18cd25fa41ac3d45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.328756 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.338167 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:11:12.769160931 +0000 UTC Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.340841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.340906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.340920 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.340938 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.340951 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.343069 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799
488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.358267 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.373168 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.385698 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.400198 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.426014 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.435775 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.444034 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.444061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.444071 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.444084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.444093 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.448267 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.456373 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T17:17:56Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.546696 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.546730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.546741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.546758 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.546769 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.649593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.649678 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.649697 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.649731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.649754 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.752959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.753025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.753051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.753084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.753107 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.856105 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.856158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.856180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.856201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.856215 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.959527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.959589 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.959611 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.959639 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:56 crc kubenswrapper[4823]: I0121 17:17:56.959659 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:56Z","lastTransitionTime":"2026-01-21T17:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.062001 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.062032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.062041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.062056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.062067 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.165711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.165769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.165780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.165797 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.165810 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.267976 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.268015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.268026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.268042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.268052 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.339221 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:23:39.768981253 +0000 UTC Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.343609 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.343677 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.343621 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.343677 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:57 crc kubenswrapper[4823]: E0121 17:17:57.343832 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:57 crc kubenswrapper[4823]: E0121 17:17:57.343950 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:57 crc kubenswrapper[4823]: E0121 17:17:57.344178 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:57 crc kubenswrapper[4823]: E0121 17:17:57.344280 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.370728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.371120 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.371327 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.371577 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.371745 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.474391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.474433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.474443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.474460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.474471 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.577914 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.577948 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.577958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.577974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.577984 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.681003 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.681041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.681051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.681068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.681079 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.784416 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.784495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.784514 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.784943 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.784991 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.888406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.888483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.888531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.888561 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.888583 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.991940 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.991984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.991996 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.992014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:57 crc kubenswrapper[4823]: I0121 17:17:57.992028 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:57Z","lastTransitionTime":"2026-01-21T17:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.095263 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.095342 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.095367 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.095398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.095424 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:58Z","lastTransitionTime":"2026-01-21T17:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.198507 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.198555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.198568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.198585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.198597 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:58Z","lastTransitionTime":"2026-01-21T17:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.223475 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/3.log" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.224196 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/2.log" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.226577 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" exitCode=1 Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.226615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.226646 4823 scope.go:117] "RemoveContainer" containerID="acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.227275 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" Jan 21 17:17:58 crc kubenswrapper[4823]: E0121 17:17:58.227404 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.243223 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.256075 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.266346 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d32b4b0e-e14f-475f-952a-f032f5161aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b5ce32fb9cb2dd4ccaf6b27c00ec29029ec90005b80de18cd25fa41ac3d45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.278034 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.290301 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.301778 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.303796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.303839 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.303862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.303878 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.303888 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:58Z","lastTransitionTime":"2026-01-21T17:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.315472 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.328940 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f
180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.340007 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:17:16.652978573 +0000 UTC Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.340353 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.353092 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca
001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.365143 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.377393 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:45Z\\\",\\\"message\\\":\\\"2026-01-21T17:16:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417\\\\n2026-01-21T17:16:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417 to /host/opt/cni/bin/\\\\n2026-01-21T17:16:59Z [verbose] multus-daemon started\\\\n2026-01-21T17:16:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T17:17:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.391110 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 
17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.403363 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.406199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.406237 4823 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.406253 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.406274 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.406289 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:58Z","lastTransitionTime":"2026-01-21T17:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.420561 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896d3310211c1051a703cecf77a714d26fa5dac5
bab2d42156660132896e6151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:57Z\\\",\\\"message\\\":\\\"ry.go:160\\\\nI0121 17:17:56.486729 6860 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:56.486804 6860 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:56.487252 6860 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 17:17:56.487471 6860 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:56.487801 6860 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 17:17:56.487879 6860 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 17:17:56.487983 
6860 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 17:17:56.488099 6860 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:56.488138 6860 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 17:17:56.488225 6860 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.
168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.431144 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.442627 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.451044 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:58Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.510042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.510119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.510142 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.510169 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.510187 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:58Z","lastTransitionTime":"2026-01-21T17:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.613088 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.613156 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.613176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.613197 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.613216 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:58Z","lastTransitionTime":"2026-01-21T17:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.715256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.715293 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.715303 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.715319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.715329 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:58Z","lastTransitionTime":"2026-01-21T17:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.818754 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.818801 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.818812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.818830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.818841 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:58Z","lastTransitionTime":"2026-01-21T17:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.921783 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.921815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.921828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.921843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:58 crc kubenswrapper[4823]: I0121 17:17:58.921876 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:58Z","lastTransitionTime":"2026-01-21T17:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.024654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.024732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.024756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.024786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.024808 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.127964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.128295 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.128444 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.128625 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.128762 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.230772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.230818 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.230828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.230844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.230877 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.232327 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/3.log" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.332769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.332813 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.332828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.332869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.332887 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.340960 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:11:37.008862324 +0000 UTC Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.343441 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.343466 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.343552 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.343555 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:17:59 crc kubenswrapper[4823]: E0121 17:17:59.343745 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:17:59 crc kubenswrapper[4823]: E0121 17:17:59.343792 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:17:59 crc kubenswrapper[4823]: E0121 17:17:59.343869 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:17:59 crc kubenswrapper[4823]: E0121 17:17:59.344036 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.359474 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.373460 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa6a639c01f3a9e2bffe386cf9a75796ccd5cd7a58b76c7a4fa3f88a0eaa2c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.387055 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aedcad4-c5da-40a2-a783-ce9096a63c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0778fb64032436b3a8fbb3a737c4975438fe33823c1b67c38c6fa08dfd89fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m48ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4m4vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.405837 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48951ca6-6148-41a8-bdc2-d753cf3ecea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96dabbe926abaca2e3930c8894e1154912d9d83bb758bb3b2f925c5dc5fdb2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f225d7773f180d1faedd7ed3e4dc8c93ac7093d8baa5651dc8ad15fb85c5ad0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba90a87ed9bda602b5abba6d6adebc51f5eb8fd4100e23c1e5f4d6f1c15b9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec6bcf4ef4d21f9e54882c51c4496aa05497da2cf300fa49007e74d669e5a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edc5c196a16304dd5da047d9cfead538147b3243508064e91db1aa0024b0229a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5bfb247ed05ba44c68e950843f3cf994129e2f56ff103a434acf5793565b71b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef018661153f255ff2accc8b74e4cc2882f1fd20ab0dfe50ef53a6c1f7470cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jv5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tsbbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.416975 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6pdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6b2e38e-0844-4f45-8f61-0e1ee997556a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca591b730a6638ebe21875fa5e47a89b90639460477166aec311cdc4f5718b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdmp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6pdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.428572 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1464437f-0858-4b4f-926f-f30800d61e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583de880d581841c3ae072cc7b81b34c35b982cb5d509ccc6e8ad0f8ae5b03ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceda979744470a54d63f6f93018a3dba655c7013a33b2718d63f1d532fc8c5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedfedcc18d282f043b86b50067e47af596e1a84b79de800d506ab36869a369d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2910d326c742709b2daa34b66f75655df210e05c6acf68204bd939f4e7fad770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.435699 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.435723 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.435731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.435744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.435753 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.439818 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-htjnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-htjnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.452551 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-skvzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea8699bd-e53a-443e-b2e5-0fe577f2c19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:45Z\\\",\\\"message\\\":\\\"2026-01-21T17:16:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417\\\\n2026-01-21T17:16:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fdb51e4e-344e-4e95-81d4-3985f62c7417 to /host/opt/cni/bin/\\\\n2026-01-21T17:16:59Z [verbose] multus-daemon started\\\\n2026-01-21T17:16:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T17:17:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlrpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-skvzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.464225 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eff6fe-74a2-4866-8002-700cebf3efbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c279847b174617c1041d385b9d9bf1d4d7656d7e754614aae9eca4ba9d793d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c2ef524cba75f0bc9f9f17fcdf258ae9e060a6357b77f03cf89e1df35073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:17:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-67glt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 
17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.474523 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5k6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a35441-43d2-44fd-b8c7-5fe354ebae4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7314092ee6ec0101289ec2c328f653214b3fc9b007f7f9db32c061811b16fcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lftvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5k6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.497699 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f1d66f-b00f-4e75-8130-43977e13eec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc011f8bfc47afb44ce28b52b612239098f1175cd2b2ff47fe9d3da1c133bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:27Z\\\",\\\"message\\\":\\\" neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 17:17:27.100595 6468 factory.go:656] Stopping watch factory\\\\nI0121 17:17:27.101429 6468 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:27.100976 6468 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 17:17:27.101951 6468 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 17:17:27.101101 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0121 17:17:27.102048 6468 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.961737ms\\\\nI0121 17:17:27.102064 6468 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nF0121 17:17:27.102061 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T17:17:57Z\\\",\\\"message\\\":\\\"ry.go:160\\\\nI0121 17:17:56.486729 6860 
reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:56.486804 6860 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:56.487252 6860 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 17:17:56.487471 6860 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 17:17:56.487801 6860 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 17:17:56.487879 6860 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 17:17:56.487983 6860 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 17:17:56.488099 6860 ovnkube.go:599] Stopped ovnkube\\\\nI0121 17:17:56.488138 6860 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 17:17:56.488225 6860 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2fdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7q2df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.510908 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.524417 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.537683 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2741fe-963b-4e61-a838-0f6edada0da9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22260d19c2ad083ed348cf690a29dfd4831783d18dc8973b38abd9575b962272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20892a2a809c162d78419c4a8df1633be8e029047e6bf9d98dca55e6c121c021\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://635b2c7553e98bb97a73c9f6839e44d6428dbea2e0f8bb9394df27d4e4939db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.537800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.537824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.537835 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.537872 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.537890 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.549106 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8400e1f0a4fd9d5ab18d2c04049e942901c0088ac9b6e4d3f3ee07fb030bc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.561571 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2445b6fdc65a52d0263516b73a2329422bcd3b997805107cbc7d9eb901056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590e790675474ba49b3a31c05dd8ad3b066c68e8857ac73e64c5af24484cd975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.572725 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d32b4b0e-e14f-475f-952a-f032f5161aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b5ce32fb9cb2dd4ccaf6b27c00ec29029ec90005b80de18cd25fa41ac3d45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b5e565ec8855f0364b598ada77648d7e823e0e31e90bd085a70b9e3ed14ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.586114 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5153e76f-0977-41d2-a733-738cd41c36f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T17:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 17:16:51.831477 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 17:16:51.846976 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-407161818/tls.crt::/tmp/serving-cert-407161818/tls.key\\\\\\\"\\\\nI0121 17:16:57.497089 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 17:16:57.513430 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 17:16:57.513460 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 17:16:57.513492 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 17:16:57.513500 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 17:16:57.520316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 17:16:57.520336 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 17:16:57.520347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 17:16:57.520351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 17:16:57.520371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 17:16:57.520374 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 17:16:57.520443 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 17:16:57.526195 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T17:16:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T17:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T17:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T17:17:59Z is after 2025-08-24T17:21:41Z" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.640666 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.640738 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.640761 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.640790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.640814 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.744102 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.744158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.744171 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.744187 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.744198 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.846728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.846787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.846811 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.846838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.846891 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.950241 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.950284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.950295 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.950309 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:17:59 crc kubenswrapper[4823]: I0121 17:17:59.950320 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:17:59Z","lastTransitionTime":"2026-01-21T17:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.052475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.052511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.052521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.052535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.052546 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.159053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.159123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.159163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.159195 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.159216 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.261821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.261960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.261991 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.262020 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.262040 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.341880 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:38:10.343098727 +0000 UTC Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.364092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.364205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.364243 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.364279 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.364302 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.467026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.467060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.467070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.467086 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.467098 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.570198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.570242 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.570253 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.570271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.570282 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.674273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.674321 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.674332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.674354 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.674364 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.777254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.777318 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.777340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.777368 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.777391 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.880740 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.880808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.880831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.880896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.880920 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.984347 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.984402 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.984421 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.984445 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:00 crc kubenswrapper[4823]: I0121 17:18:00.984462 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:00Z","lastTransitionTime":"2026-01-21T17:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.086985 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.087028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.087040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.087057 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.087069 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:01Z","lastTransitionTime":"2026-01-21T17:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.190965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.191027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.191044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.191070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.191087 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:01Z","lastTransitionTime":"2026-01-21T17:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.294094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.294164 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.294187 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.294210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.294227 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:01Z","lastTransitionTime":"2026-01-21T17:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.343480 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.344137 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.343642 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:05.343618732 +0000 UTC m=+146.269749632 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.343708 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:44:26.856566635 +0000 UTC Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.343779 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.344253 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.343922 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.344330 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.343511 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.344493 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.344529 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.344563 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.344587 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.344701 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.344719 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.344730 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.344767 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 17:19:05.34475244 +0000 UTC m=+146.270883300 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.343684 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.345236 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.345377 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.345408 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:19:05.345398687 +0000 UTC m=+146.271529547 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.345542 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.345556 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.345566 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.345594 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 17:19:05.345584881 +0000 UTC m=+146.271715781 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.345720 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:18:01 crc kubenswrapper[4823]: E0121 17:18:01.345752 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 17:19:05.345741665 +0000 UTC m=+146.271872525 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.398460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.398534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.398550 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.398602 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.398624 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:01Z","lastTransitionTime":"2026-01-21T17:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.501216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.501621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.501756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.501922 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.502080 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:01Z","lastTransitionTime":"2026-01-21T17:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.604383 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.604697 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.604707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.604724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.604738 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:01Z","lastTransitionTime":"2026-01-21T17:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.707564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.707614 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.707626 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.707643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.707655 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:01Z","lastTransitionTime":"2026-01-21T17:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.810312 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.810377 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.810394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.810420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.810439 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:01Z","lastTransitionTime":"2026-01-21T17:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.913056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.913095 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.913105 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.913122 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:01 crc kubenswrapper[4823]: I0121 17:18:01.913135 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:01Z","lastTransitionTime":"2026-01-21T17:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.015968 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.016021 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.016037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.016062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.016078 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:02Z","lastTransitionTime":"2026-01-21T17:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.118361 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.118413 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.118427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.118446 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.118457 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:02Z","lastTransitionTime":"2026-01-21T17:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.172800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.172841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.172874 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.172892 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.172903 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T17:18:02Z","lastTransitionTime":"2026-01-21T17:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.234256 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk"] Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.235635 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.239060 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.239095 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.239071 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.242433 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.254160 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.254142484 podStartE2EDuration="17.254142484s" podCreationTimestamp="2026-01-21 17:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.253845446 +0000 UTC m=+83.179976316" watchObservedRunningTime="2026-01-21 17:18:02.254142484 +0000 UTC m=+83.180273344" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.270358 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=65.270339804 podStartE2EDuration="1m5.270339804s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.269942444 +0000 UTC m=+83.196073344" watchObservedRunningTime="2026-01-21 17:18:02.270339804 +0000 UTC m=+83.196470664" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.283777 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.283760724 podStartE2EDuration="1m5.283760724s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.283665021 +0000 UTC m=+83.209795881" watchObservedRunningTime="2026-01-21 17:18:02.283760724 +0000 UTC m=+83.209891584" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.335412 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tsbbs" podStartSLOduration=64.33538929 podStartE2EDuration="1m4.33538929s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.333520173 +0000 UTC m=+83.259651053" watchObservedRunningTime="2026-01-21 17:18:02.33538929 +0000 UTC m=+83.261520170" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.344546 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:31:47.415437633 +0000 UTC Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.344619 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Rotating certificates Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.350408 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-m6pdc" podStartSLOduration=64.35038626 podStartE2EDuration="1m4.35038626s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.349977159 +0000 UTC m=+83.276108049" watchObservedRunningTime="2026-01-21 17:18:02.35038626 +0000 UTC m=+83.276517170" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.353271 4823 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.378685 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.378777 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.378810 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.378905 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.378943 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.394491 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.39427432 podStartE2EDuration="35.39427432s" podCreationTimestamp="2026-01-21 17:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.379364063 +0000 UTC m=+83.305494933" 
watchObservedRunningTime="2026-01-21 17:18:02.39427432 +0000 UTC m=+83.320405190" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.428419 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podStartSLOduration=64.428404614 podStartE2EDuration="1m4.428404614s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.42823722 +0000 UTC m=+83.354368090" watchObservedRunningTime="2026-01-21 17:18:02.428404614 +0000 UTC m=+83.354535474" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.454000 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-67glt" podStartSLOduration=64.453981511 podStartE2EDuration="1m4.453981511s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.453829127 +0000 UTC m=+83.379959987" watchObservedRunningTime="2026-01-21 17:18:02.453981511 +0000 UTC m=+83.380112371" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.454628 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-skvzm" podStartSLOduration=64.454621937 podStartE2EDuration="1m4.454621937s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.443478065 +0000 UTC m=+83.369608925" watchObservedRunningTime="2026-01-21 17:18:02.454621937 +0000 UTC m=+83.380752797" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.480042 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.480088 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.480137 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.480172 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 
crc kubenswrapper[4823]: I0121 17:18:02.480215 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.480265 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.480335 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.481134 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.489704 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.502703 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8721037-e3f8-4e26-b4b1-db0ca8ab3236-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rr4bk\" (UID: \"c8721037-e3f8-4e26-b4b1-db0ca8ab3236\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.539586 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-q5k6p" podStartSLOduration=65.539564377 podStartE2EDuration="1m5.539564377s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:02.512109122 +0000 UTC m=+83.438239982" watchObservedRunningTime="2026-01-21 17:18:02.539564377 +0000 UTC m=+83.465695257" Jan 21 17:18:02 crc kubenswrapper[4823]: I0121 17:18:02.555316 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" Jan 21 17:18:03 crc kubenswrapper[4823]: I0121 17:18:03.250421 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" event={"ID":"c8721037-e3f8-4e26-b4b1-db0ca8ab3236","Type":"ContainerStarted","Data":"38f82f1bb7755cdda5632c6e18fdb45133c8aaa84d0ce941385a3ebb8e6d3e83"} Jan 21 17:18:03 crc kubenswrapper[4823]: I0121 17:18:03.250505 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" event={"ID":"c8721037-e3f8-4e26-b4b1-db0ca8ab3236","Type":"ContainerStarted","Data":"566051b8291b736d6bf5dc83a4e0d7228898b8cdd3f4226177bee1526d0872c9"} Jan 21 17:18:03 crc kubenswrapper[4823]: I0121 17:18:03.270792 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rr4bk" podStartSLOduration=66.270760621 podStartE2EDuration="1m6.270760621s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:03.27034415 +0000 UTC m=+84.196475010" watchObservedRunningTime="2026-01-21 17:18:03.270760621 +0000 UTC m=+84.196891531" Jan 21 17:18:03 crc kubenswrapper[4823]: I0121 17:18:03.343671 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:03 crc kubenswrapper[4823]: I0121 17:18:03.343788 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:03 crc kubenswrapper[4823]: I0121 17:18:03.343801 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:03 crc kubenswrapper[4823]: I0121 17:18:03.343790 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:03 crc kubenswrapper[4823]: E0121 17:18:03.343914 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:03 crc kubenswrapper[4823]: E0121 17:18:03.344110 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:03 crc kubenswrapper[4823]: E0121 17:18:03.344236 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:03 crc kubenswrapper[4823]: E0121 17:18:03.344343 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:05 crc kubenswrapper[4823]: I0121 17:18:05.342829 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:05 crc kubenswrapper[4823]: I0121 17:18:05.342948 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:05 crc kubenswrapper[4823]: I0121 17:18:05.342894 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:05 crc kubenswrapper[4823]: I0121 17:18:05.342840 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:05 crc kubenswrapper[4823]: E0121 17:18:05.343103 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:05 crc kubenswrapper[4823]: E0121 17:18:05.343275 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:05 crc kubenswrapper[4823]: E0121 17:18:05.343385 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:05 crc kubenswrapper[4823]: E0121 17:18:05.343511 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:07 crc kubenswrapper[4823]: I0121 17:18:07.343266 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:07 crc kubenswrapper[4823]: I0121 17:18:07.343305 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:07 crc kubenswrapper[4823]: I0121 17:18:07.343301 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:07 crc kubenswrapper[4823]: E0121 17:18:07.343444 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:07 crc kubenswrapper[4823]: E0121 17:18:07.343520 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:07 crc kubenswrapper[4823]: I0121 17:18:07.343542 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:07 crc kubenswrapper[4823]: E0121 17:18:07.343769 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:07 crc kubenswrapper[4823]: E0121 17:18:07.343847 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:09 crc kubenswrapper[4823]: I0121 17:18:09.343290 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:09 crc kubenswrapper[4823]: I0121 17:18:09.343393 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:09 crc kubenswrapper[4823]: E0121 17:18:09.344151 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:09 crc kubenswrapper[4823]: I0121 17:18:09.344215 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:09 crc kubenswrapper[4823]: I0121 17:18:09.344219 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:09 crc kubenswrapper[4823]: E0121 17:18:09.344271 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:09 crc kubenswrapper[4823]: E0121 17:18:09.344424 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:09 crc kubenswrapper[4823]: I0121 17:18:09.345117 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" Jan 21 17:18:09 crc kubenswrapper[4823]: E0121 17:18:09.345318 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" Jan 21 17:18:09 crc kubenswrapper[4823]: E0121 17:18:09.345426 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:11 crc kubenswrapper[4823]: I0121 17:18:11.343591 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:11 crc kubenswrapper[4823]: E0121 17:18:11.344618 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:11 crc kubenswrapper[4823]: I0121 17:18:11.343635 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:11 crc kubenswrapper[4823]: E0121 17:18:11.344854 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:11 crc kubenswrapper[4823]: I0121 17:18:11.343729 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:11 crc kubenswrapper[4823]: I0121 17:18:11.343596 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:11 crc kubenswrapper[4823]: E0121 17:18:11.345147 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:11 crc kubenswrapper[4823]: E0121 17:18:11.345547 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:13 crc kubenswrapper[4823]: I0121 17:18:13.343651 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:13 crc kubenswrapper[4823]: I0121 17:18:13.343891 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:13 crc kubenswrapper[4823]: I0121 17:18:13.343932 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:13 crc kubenswrapper[4823]: I0121 17:18:13.344007 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:13 crc kubenswrapper[4823]: E0121 17:18:13.345273 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:13 crc kubenswrapper[4823]: E0121 17:18:13.345431 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:13 crc kubenswrapper[4823]: E0121 17:18:13.345538 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:13 crc kubenswrapper[4823]: E0121 17:18:13.345726 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:15 crc kubenswrapper[4823]: I0121 17:18:15.344516 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:15 crc kubenswrapper[4823]: E0121 17:18:15.344692 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:15 crc kubenswrapper[4823]: I0121 17:18:15.345109 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:15 crc kubenswrapper[4823]: I0121 17:18:15.345392 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:15 crc kubenswrapper[4823]: I0121 17:18:15.345386 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:15 crc kubenswrapper[4823]: E0121 17:18:15.345559 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:15 crc kubenswrapper[4823]: E0121 17:18:15.345876 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:15 crc kubenswrapper[4823]: E0121 17:18:15.345971 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:15 crc kubenswrapper[4823]: I0121 17:18:15.359804 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 17:18:15 crc kubenswrapper[4823]: I0121 17:18:15.833583 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:15 crc kubenswrapper[4823]: E0121 17:18:15.833716 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:18:15 crc kubenswrapper[4823]: E0121 17:18:15.833766 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs podName:9bcd33a4-ea1e-4977-8456-e34f2ed4c680 nodeName:}" failed. No retries permitted until 2026-01-21 17:19:19.833754031 +0000 UTC m=+160.759884881 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs") pod "network-metrics-daemon-htjnl" (UID: "9bcd33a4-ea1e-4977-8456-e34f2ed4c680") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 17:18:17 crc kubenswrapper[4823]: I0121 17:18:17.343223 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:17 crc kubenswrapper[4823]: E0121 17:18:17.344122 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:17 crc kubenswrapper[4823]: I0121 17:18:17.343363 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:17 crc kubenswrapper[4823]: E0121 17:18:17.344335 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:17 crc kubenswrapper[4823]: I0121 17:18:17.343406 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:17 crc kubenswrapper[4823]: I0121 17:18:17.343306 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:17 crc kubenswrapper[4823]: E0121 17:18:17.344717 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:17 crc kubenswrapper[4823]: E0121 17:18:17.344523 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:19 crc kubenswrapper[4823]: I0121 17:18:19.342967 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:19 crc kubenswrapper[4823]: I0121 17:18:19.342978 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:19 crc kubenswrapper[4823]: I0121 17:18:19.343030 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:19 crc kubenswrapper[4823]: I0121 17:18:19.343068 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:19 crc kubenswrapper[4823]: E0121 17:18:19.344395 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:19 crc kubenswrapper[4823]: E0121 17:18:19.344587 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:19 crc kubenswrapper[4823]: E0121 17:18:19.344684 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:19 crc kubenswrapper[4823]: E0121 17:18:19.344739 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:21 crc kubenswrapper[4823]: I0121 17:18:21.342942 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:21 crc kubenswrapper[4823]: I0121 17:18:21.343037 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:21 crc kubenswrapper[4823]: I0121 17:18:21.343102 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:21 crc kubenswrapper[4823]: I0121 17:18:21.343121 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:21 crc kubenswrapper[4823]: E0121 17:18:21.343238 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:21 crc kubenswrapper[4823]: E0121 17:18:21.343411 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:21 crc kubenswrapper[4823]: E0121 17:18:21.343469 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:21 crc kubenswrapper[4823]: E0121 17:18:21.343773 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:23 crc kubenswrapper[4823]: I0121 17:18:23.342654 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:23 crc kubenswrapper[4823]: I0121 17:18:23.342785 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:23 crc kubenswrapper[4823]: E0121 17:18:23.342952 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:23 crc kubenswrapper[4823]: I0121 17:18:23.343021 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:23 crc kubenswrapper[4823]: I0121 17:18:23.342967 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:23 crc kubenswrapper[4823]: E0121 17:18:23.343148 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:23 crc kubenswrapper[4823]: E0121 17:18:23.344248 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:23 crc kubenswrapper[4823]: E0121 17:18:23.344409 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:23 crc kubenswrapper[4823]: I0121 17:18:23.345124 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" Jan 21 17:18:23 crc kubenswrapper[4823]: E0121 17:18:23.345464 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" Jan 21 17:18:25 crc kubenswrapper[4823]: I0121 17:18:25.342692 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:25 crc kubenswrapper[4823]: I0121 17:18:25.342756 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:25 crc kubenswrapper[4823]: I0121 17:18:25.342805 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:25 crc kubenswrapper[4823]: I0121 17:18:25.342775 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:25 crc kubenswrapper[4823]: E0121 17:18:25.343282 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:25 crc kubenswrapper[4823]: E0121 17:18:25.343448 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:25 crc kubenswrapper[4823]: E0121 17:18:25.343548 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:25 crc kubenswrapper[4823]: E0121 17:18:25.343616 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:27 crc kubenswrapper[4823]: I0121 17:18:27.343435 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:27 crc kubenswrapper[4823]: I0121 17:18:27.343492 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:27 crc kubenswrapper[4823]: I0121 17:18:27.343456 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:27 crc kubenswrapper[4823]: I0121 17:18:27.343455 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:27 crc kubenswrapper[4823]: E0121 17:18:27.343605 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:27 crc kubenswrapper[4823]: E0121 17:18:27.343666 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:27 crc kubenswrapper[4823]: E0121 17:18:27.343789 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:27 crc kubenswrapper[4823]: E0121 17:18:27.343976 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:29 crc kubenswrapper[4823]: I0121 17:18:29.343393 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:29 crc kubenswrapper[4823]: I0121 17:18:29.343412 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:29 crc kubenswrapper[4823]: E0121 17:18:29.344619 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:29 crc kubenswrapper[4823]: I0121 17:18:29.344727 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:29 crc kubenswrapper[4823]: I0121 17:18:29.344746 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:29 crc kubenswrapper[4823]: E0121 17:18:29.344958 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:29 crc kubenswrapper[4823]: E0121 17:18:29.345144 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:29 crc kubenswrapper[4823]: E0121 17:18:29.345243 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.343099 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:31 crc kubenswrapper[4823]: E0121 17:18:31.343295 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.343593 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:31 crc kubenswrapper[4823]: E0121 17:18:31.343696 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.344487 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.344570 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:31 crc kubenswrapper[4823]: E0121 17:18:31.344679 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:31 crc kubenswrapper[4823]: E0121 17:18:31.344834 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.345362 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/1.log" Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.345975 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/0.log" Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.346011 4823 generic.go:334] "Generic (PLEG): container finished" podID="ea8699bd-e53a-443e-b2e5-0fe577f2c19f" containerID="49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a" exitCode=1 Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.348893 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-skvzm" event={"ID":"ea8699bd-e53a-443e-b2e5-0fe577f2c19f","Type":"ContainerDied","Data":"49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a"} Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.348966 4823 scope.go:117] "RemoveContainer" containerID="788542fad75c176426c01f4073d15c6ab73f1425a0da1e91426812689bd2b4f5" Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.349385 4823 scope.go:117] "RemoveContainer" containerID="49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a" Jan 21 17:18:31 crc kubenswrapper[4823]: E0121 17:18:31.349600 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-skvzm_openshift-multus(ea8699bd-e53a-443e-b2e5-0fe577f2c19f)\"" pod="openshift-multus/multus-skvzm" podUID="ea8699bd-e53a-443e-b2e5-0fe577f2c19f" Jan 21 17:18:31 crc kubenswrapper[4823]: I0121 17:18:31.381809 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.381788513 podStartE2EDuration="16.381788513s" podCreationTimestamp="2026-01-21 17:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:19.374663139 +0000 UTC m=+100.300793999" watchObservedRunningTime="2026-01-21 17:18:31.381788513 +0000 UTC m=+112.307919373" Jan 21 17:18:32 crc kubenswrapper[4823]: I0121 17:18:32.352516 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/1.log" Jan 21 17:18:33 crc kubenswrapper[4823]: I0121 17:18:33.343650 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:33 crc kubenswrapper[4823]: I0121 17:18:33.343662 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:33 crc kubenswrapper[4823]: I0121 17:18:33.343662 4823 util.go:30] "No sandbox for pod can be found. 
Jan 21 17:18:33 crc kubenswrapper[4823]: E0121 17:18:33.344280 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:18:33 crc kubenswrapper[4823]: E0121 17:18:33.344146 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:18:33 crc kubenswrapper[4823]: E0121 17:18:33.344381 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:18:33 crc kubenswrapper[4823]: I0121 17:18:33.343709 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:18:33 crc kubenswrapper[4823]: E0121 17:18:33.344478 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:18:34 crc kubenswrapper[4823]: I0121 17:18:34.344085 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"
Jan 21 17:18:34 crc kubenswrapper[4823]: E0121 17:18:34.344257 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7q2df_openshift-ovn-kubernetes(b5f1d66f-b00f-4e75-8130-43977e13eec8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8"
Jan 21 17:18:35 crc kubenswrapper[4823]: I0121 17:18:35.342884 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:18:35 crc kubenswrapper[4823]: I0121 17:18:35.342967 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:18:35 crc kubenswrapper[4823]: E0121 17:18:35.343010 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:18:35 crc kubenswrapper[4823]: I0121 17:18:35.343041 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:18:35 crc kubenswrapper[4823]: E0121 17:18:35.343164 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:18:35 crc kubenswrapper[4823]: I0121 17:18:35.343268 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:18:35 crc kubenswrapper[4823]: E0121 17:18:35.343304 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:18:35 crc kubenswrapper[4823]: E0121 17:18:35.343518 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:18:37 crc kubenswrapper[4823]: I0121 17:18:37.343710 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:18:37 crc kubenswrapper[4823]: I0121 17:18:37.343906 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:18:37 crc kubenswrapper[4823]: I0121 17:18:37.344001 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:18:37 crc kubenswrapper[4823]: I0121 17:18:37.344037 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:18:37 crc kubenswrapper[4823]: E0121 17:18:37.344044 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:18:37 crc kubenswrapper[4823]: E0121 17:18:37.344151 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:18:37 crc kubenswrapper[4823]: E0121 17:18:37.344243 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:18:37 crc kubenswrapper[4823]: E0121 17:18:37.344312 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:18:39 crc kubenswrapper[4823]: I0121 17:18:39.343500 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:18:39 crc kubenswrapper[4823]: I0121 17:18:39.343590 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:18:39 crc kubenswrapper[4823]: E0121 17:18:39.344419 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:18:39 crc kubenswrapper[4823]: I0121 17:18:39.344438 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:18:39 crc kubenswrapper[4823]: I0121 17:18:39.344476 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:18:39 crc kubenswrapper[4823]: E0121 17:18:39.344579 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:18:39 crc kubenswrapper[4823]: E0121 17:18:39.344677 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:18:39 crc kubenswrapper[4823]: E0121 17:18:39.344733 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:18:39 crc kubenswrapper[4823]: E0121 17:18:39.358697 4823 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 21 17:18:40 crc kubenswrapper[4823]: E0121 17:18:40.219241 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 17:18:41 crc kubenswrapper[4823]: I0121 17:18:41.343145 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:18:41 crc kubenswrapper[4823]: I0121 17:18:41.343224 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:18:41 crc kubenswrapper[4823]: I0121 17:18:41.343284 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:18:41 crc kubenswrapper[4823]: E0121 17:18:41.343446 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:18:41 crc kubenswrapper[4823]: I0121 17:18:41.343749 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:18:41 crc kubenswrapper[4823]: E0121 17:18:41.343826 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:18:41 crc kubenswrapper[4823]: E0121 17:18:41.344062 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:18:41 crc kubenswrapper[4823]: E0121 17:18:41.344138 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:18:43 crc kubenswrapper[4823]: I0121 17:18:43.342990 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:18:43 crc kubenswrapper[4823]: I0121 17:18:43.343291 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:18:43 crc kubenswrapper[4823]: I0121 17:18:43.343343 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:18:43 crc kubenswrapper[4823]: I0121 17:18:43.343313 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:18:43 crc kubenswrapper[4823]: E0121 17:18:43.343546 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:18:43 crc kubenswrapper[4823]: E0121 17:18:43.343740 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:18:43 crc kubenswrapper[4823]: E0121 17:18:43.343979 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:18:43 crc kubenswrapper[4823]: E0121 17:18:43.344152 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:18:45 crc kubenswrapper[4823]: E0121 17:18:45.220688 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 17:18:45 crc kubenswrapper[4823]: I0121 17:18:45.343045 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:18:45 crc kubenswrapper[4823]: I0121 17:18:45.343053 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:18:45 crc kubenswrapper[4823]: I0121 17:18:45.343076 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:18:45 crc kubenswrapper[4823]: E0121 17:18:45.343251 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:18:45 crc kubenswrapper[4823]: I0121 17:18:45.343282 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:18:45 crc kubenswrapper[4823]: E0121 17:18:45.343351 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 17:18:45 crc kubenswrapper[4823]: E0121 17:18:45.343408 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680"
Jan 21 17:18:45 crc kubenswrapper[4823]: E0121 17:18:45.343458 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:45 crc kubenswrapper[4823]: I0121 17:18:45.343912 4823 scope.go:117] "RemoveContainer" containerID="49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a" Jan 21 17:18:46 crc kubenswrapper[4823]: I0121 17:18:46.344075 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" Jan 21 17:18:46 crc kubenswrapper[4823]: I0121 17:18:46.404903 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/1.log" Jan 21 17:18:46 crc kubenswrapper[4823]: I0121 17:18:46.404977 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-skvzm" event={"ID":"ea8699bd-e53a-443e-b2e5-0fe577f2c19f","Type":"ContainerStarted","Data":"3d8c899ad3979e18f7a6f8a0287b50cebebda6042365ff774c1cd367e9563469"} Jan 21 17:18:47 crc kubenswrapper[4823]: I0121 17:18:47.185419 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-htjnl"] Jan 21 17:18:47 crc kubenswrapper[4823]: I0121 17:18:47.185568 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:47 crc kubenswrapper[4823]: E0121 17:18:47.185672 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:47 crc kubenswrapper[4823]: I0121 17:18:47.343518 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:47 crc kubenswrapper[4823]: I0121 17:18:47.343567 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:47 crc kubenswrapper[4823]: I0121 17:18:47.343588 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:47 crc kubenswrapper[4823]: E0121 17:18:47.343655 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 17:18:47 crc kubenswrapper[4823]: E0121 17:18:47.343735 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 17:18:47 crc kubenswrapper[4823]: E0121 17:18:47.343798 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 17:18:47 crc kubenswrapper[4823]: I0121 17:18:47.409952 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/3.log" Jan 21 17:18:47 crc kubenswrapper[4823]: I0121 17:18:47.412186 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerStarted","Data":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} Jan 21 17:18:47 crc kubenswrapper[4823]: I0121 17:18:47.412703 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:18:47 crc kubenswrapper[4823]: I0121 17:18:47.438884 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podStartSLOduration=109.43884807 podStartE2EDuration="1m49.43884807s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:47.437481445 +0000 UTC m=+128.363612305" watchObservedRunningTime="2026-01-21 17:18:47.43884807 +0000 UTC m=+128.364978930" Jan 21 17:18:48 crc kubenswrapper[4823]: I0121 17:18:48.343119 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:18:48 crc kubenswrapper[4823]: E0121 17:18:48.343622 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-htjnl" podUID="9bcd33a4-ea1e-4977-8456-e34f2ed4c680" Jan 21 17:18:49 crc kubenswrapper[4823]: I0121 17:18:49.343204 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:18:49 crc kubenswrapper[4823]: I0121 17:18:49.343272 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:18:49 crc kubenswrapper[4823]: I0121 17:18:49.343229 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:18:49 crc kubenswrapper[4823]: E0121 17:18:49.344575 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 17:18:49 crc kubenswrapper[4823]: E0121 17:18:49.344692 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 17:18:49 crc kubenswrapper[4823]: E0121 17:18:49.344824 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 17:18:50 crc kubenswrapper[4823]: I0121 17:18:50.342731 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl"
Jan 21 17:18:50 crc kubenswrapper[4823]: I0121 17:18:50.346926 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 21 17:18:50 crc kubenswrapper[4823]: I0121 17:18:50.348770 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 21 17:18:51 crc kubenswrapper[4823]: I0121 17:18:51.343084 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 17:18:51 crc kubenswrapper[4823]: I0121 17:18:51.343849 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 17:18:51 crc kubenswrapper[4823]: I0121 17:18:51.344294 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 17:18:51 crc kubenswrapper[4823]: I0121 17:18:51.347430 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 21 17:18:51 crc kubenswrapper[4823]: I0121 17:18:51.347826 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 17:18:51 crc kubenswrapper[4823]: I0121 17:18:51.347827 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 21 17:18:51 crc kubenswrapper[4823]: I0121 17:18:51.350704 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.370974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.419375 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qxt5l"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.420465 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.420659 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.423270 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.423424 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.425464 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.430773 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.431192 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.431407 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-chzvq"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.431279 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.432200 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.432383 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.432530 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.432581 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.432804 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.432892 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.433095 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.433236 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.433556 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.433687 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.433727 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.433976 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.434163 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.434417 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.434726 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.435035 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tnrwq"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.435314 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.435517 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.437006 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.439105 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.439623 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.440007 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.443670 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qdbwc"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.444131 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qdbwc"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.444826 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.444914 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.445067 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.444842 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.445271 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.445334 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4dj4t"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.445490 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.445776 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.445833 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4dj4t"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.445963 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.446111 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.446275 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.446409 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.446445 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.446538 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.446645 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.446707 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.446845 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.447014 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.447181 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.447285 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.447661 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.449226 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.449489 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.449672 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.449824 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.455473 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jmjt6"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.456223 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jmjt6"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.458183 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.458778 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.459438 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5g629"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.460057 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5g629"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.463001 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.463039 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.463517 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.464245 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.464806 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k"
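
From here to the end of the capture, the reconciler logs one VerifyControllerAttachedVolume line per volume it wires up for the newly added pods. A small, hypothetical helper for skimming a dump like this one, tallying those lines per pod; the regexp keys off the trailing pod="..." field visible in each entry (the inner pod \"...\" mentions are escaped and do not match), and nothing here is kubelet API:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        podRE := regexp.MustCompile(`pod="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, "VerifyControllerAttachedVolume") {
                continue
            }
            if m := podRE.FindStringSubmatch(line); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%3d volumes  %s\n", n, pod)
        }
    }
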
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478252 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478295 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72b0a364-8275-4bc5-bfcf-744d1caf430e-etcd-client\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478323 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg9lf\" (UniqueName: \"kubernetes.io/projected/eff795e9-5c79-4605-8065-56c14471445f-kube-api-access-sg9lf\") pod \"cluster-samples-operator-665b6dd947-vchb4\" (UID: \"eff795e9-5c79-4605-8065-56c14471445f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478351 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-client-ca\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478372 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e52e7f7b-367a-496e-8979-cf99488572e9-serving-cert\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478394 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e52e7f7b-367a-496e-8979-cf99488572e9-trusted-ca\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478417 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-config\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478436 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb9sr\" (UniqueName: \"kubernetes.io/projected/72b0a364-8275-4bc5-bfcf-744d1caf430e-kube-api-access-tb9sr\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478458 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72b0a364-8275-4bc5-bfcf-744d1caf430e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478478 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-oauth-serving-cert\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baa89460-c316-4b9f-9060-44ad75e28e05-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-44lct\" (UID: \"baa89460-c316-4b9f-9060-44ad75e28e05\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478539 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9lqf\" (UniqueName: \"kubernetes.io/projected/36ae04e1-ee34-492a-b2af-012a3fb66740-kube-api-access-g9lqf\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478563 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-service-ca\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478582 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hkn4\" (UniqueName: \"kubernetes.io/projected/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-kube-api-access-4hkn4\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e52e7f7b-367a-496e-8979-cf99488572e9-config\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478625 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92dl\" (UniqueName: \"kubernetes.io/projected/ac886837-67ac-48e7-b5cb-024a0ed1ea01-kube-api-access-q92dl\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ae04e1-ee34-492a-b2af-012a3fb66740-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478667 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa305dd8-20a7-4adb-9c38-1f4cee672164-serving-cert\") pod \"openshift-config-operator-7777fb866f-chzvq\" (UID: \"fa305dd8-20a7-4adb-9c38-1f4cee672164\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478692 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fa305dd8-20a7-4adb-9c38-1f4cee672164-available-featuregates\") pod \"openshift-config-operator-7777fb866f-chzvq\" (UID: \"fa305dd8-20a7-4adb-9c38-1f4cee672164\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478714 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-client-ca\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478737 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvqn\" (UniqueName: \"kubernetes.io/projected/c474281f-3344-4846-9dd2-b78f8c3b7145-kube-api-access-hcvqn\") pod \"downloads-7954f5f757-jmjt6\" (UID: \"c474281f-3344-4846-9dd2-b78f8c3b7145\") " pod="openshift-console/downloads-7954f5f757-jmjt6"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478759 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-config\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478779 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/72b0a364-8275-4bc5-bfcf-744d1caf430e-audit-policies\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478801 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-auth-proxy-config\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478823 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/36ae04e1-ee34-492a-b2af-012a3fb66740-images\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478875 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-oauth-config\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478901 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txfvj\" (UniqueName: \"kubernetes.io/projected/fa305dd8-20a7-4adb-9c38-1f4cee672164-kube-api-access-txfvj\") pod \"openshift-config-operator-7777fb866f-chzvq\" (UID: \"fa305dd8-20a7-4adb-9c38-1f4cee672164\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478923 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72b0a364-8275-4bc5-bfcf-744d1caf430e-serving-cert\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478944 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-serving-cert\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478962 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-serving-cert\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.478992 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eff795e9-5c79-4605-8065-56c14471445f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vchb4\" (UID: \"eff795e9-5c79-4605-8065-56c14471445f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479012 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-trusted-ca-bundle\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t"
Jan 21 17:18:53
crc kubenswrapper[4823]: I0121 17:18:53.479035 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72b0a364-8275-4bc5-bfcf-744d1caf430e-encryption-config\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479055 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ae04e1-ee34-492a-b2af-012a3fb66740-config\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479076 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72b0a364-8275-4bc5-bfcf-744d1caf430e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479097 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-machine-approver-tls\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479118 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-serving-cert\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479138 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-config\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479159 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td6x9\" (UniqueName: \"kubernetes.io/projected/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-kube-api-access-td6x9\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72b0a364-8275-4bc5-bfcf-744d1caf430e-audit-dir\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479201 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59l8p\" (UniqueName: \"kubernetes.io/projected/e52e7f7b-367a-496e-8979-cf99488572e9-kube-api-access-59l8p\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479222 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-config\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479244 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa89460-c316-4b9f-9060-44ad75e28e05-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-44lct\" (UID: \"baa89460-c316-4b9f-9060-44ad75e28e05\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479266 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv92n\" (UniqueName: \"kubernetes.io/projected/baa89460-c316-4b9f-9060-44ad75e28e05-kube-api-access-wv92n\") pod \"openshift-controller-manager-operator-756b6f6bc6-44lct\" (UID: \"baa89460-c316-4b9f-9060-44ad75e28e05\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.479306 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7psph\" (UniqueName: \"kubernetes.io/projected/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-kube-api-access-7psph\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.487702 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.488007 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.488226 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.488518 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.488723 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.489215 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.494979 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.495804 4823 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.496077 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.496448 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.496510 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.496926 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.498739 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.498772 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.499068 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.499727 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.499752 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.496456 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.499980 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.500455 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.500516 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.514236 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.514393 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.514486 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.514706 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.514947 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.515108 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.515212 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.515330 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.517018 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.517344 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.517424 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.517454 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.517363 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.517767 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.517997 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.522463 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dc99l"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.523004 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.523172 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.523319 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.523321 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dc99l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.523530 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.524799 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wzpd6"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.525343 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.526244 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vw29d"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.526772 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.527508 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.529281 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.530046 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.530601 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.530732 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.533829 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.534084 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.534401 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.534621 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.534781 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.535893 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-vttrq"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.535977 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.536281 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8nkd"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.536395 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.536428 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.537347 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.537895 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538148 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538392 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538507 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538603 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538659 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538719 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538394 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538774 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538817 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vzq92"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.538920 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.539043 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.539436 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.542879 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.546413 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.548754 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nrwt6"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.549232 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.549478 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z797g"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.549746 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.549875 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.550346 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.550676 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.558993 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.563277 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-chzvq"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.563449 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.564352 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.568088 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.584923 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.585884 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qdbwc"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.585918 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-image-import-ca\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.585943 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.585984 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb9sr\" (UniqueName: \"kubernetes.io/projected/72b0a364-8275-4bc5-bfcf-744d1caf430e-kube-api-access-tb9sr\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586031 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c-proxy-tls\") pod \"machine-config-controller-84d6567774-z797g\" (UID: \"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586064 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7nb9\" (UniqueName: \"kubernetes.io/projected/b4c39836-a956-4dac-a063-a9a8aadfff84-kube-api-access-p7nb9\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586091 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-dir\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586117 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbvng\" (UniqueName: \"kubernetes.io/projected/b61875bd-8183-49ad-a085-7478b5b97ca8-kube-api-access-vbvng\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586148 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72b0a364-8275-4bc5-bfcf-744d1caf430e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586174 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-etcd-serving-ca\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586205 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5n8k\" (UniqueName: \"kubernetes.io/projected/cb80c5ee-bf4c-4eea-be92-0852730ef914-kube-api-access-l5n8k\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586229 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c39836-a956-4dac-a063-a9a8aadfff84-config\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586250 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z797g\" (UID: \"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586302 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb17a243-ae06-4606-836d-52a1a620bfe6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-z5hx4\" (UID: \"fb17a243-ae06-4606-836d-52a1a620bfe6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586352 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9lqf\" (UniqueName: \"kubernetes.io/projected/36ae04e1-ee34-492a-b2af-012a3fb66740-kube-api-access-g9lqf\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586372 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-oauth-serving-cert\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586511 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baa89460-c316-4b9f-9060-44ad75e28e05-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-44lct\" (UID: \"baa89460-c316-4b9f-9060-44ad75e28e05\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586580 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586611 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-service-ca\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586636 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-node-pullsecrets\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586659 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-audit\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586701 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72b0a364-8275-4bc5-bfcf-744d1caf430e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586709 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hkn4\" (UniqueName: \"kubernetes.io/projected/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-kube-api-access-4hkn4\") pod \"machine-approver-56656f9798-5v8pb\" (UID: 
\"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586733 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e52e7f7b-367a-496e-8979-cf99488572e9-config\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q92dl\" (UniqueName: \"kubernetes.io/projected/ac886837-67ac-48e7-b5cb-024a0ed1ea01-kube-api-access-q92dl\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586796 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-config\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586821 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ae04e1-ee34-492a-b2af-012a3fb66740-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586844 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xm6\" (UniqueName: \"kubernetes.io/projected/c9274abf-35a8-4c79-9995-142413907ffc-kube-api-access-x2xm6\") pod \"kube-storage-version-migrator-operator-b67b599dd-k6v4k\" (UID: \"c9274abf-35a8-4c79-9995-142413907ffc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586891 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm629\" (UniqueName: \"kubernetes.io/projected/2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c-kube-api-access-lm629\") pod \"machine-config-controller-84d6567774-z797g\" (UID: \"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586921 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa305dd8-20a7-4adb-9c38-1f4cee672164-serving-cert\") pod \"openshift-config-operator-7777fb866f-chzvq\" (UID: \"fa305dd8-20a7-4adb-9c38-1f4cee672164\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587232 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-oauth-serving-cert\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" 
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587307 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fa305dd8-20a7-4adb-9c38-1f4cee672164-available-featuregates\") pod \"openshift-config-operator-7777fb866f-chzvq\" (UID: \"fa305dd8-20a7-4adb-9c38-1f4cee672164\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587386 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-client-ca\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587423 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wc44\" (UniqueName: \"kubernetes.io/projected/2e548be0-d2ab-4bad-a18c-cbe203dbb314-kube-api-access-9wc44\") pod \"marketplace-operator-79b997595-vw29d\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587451 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587482 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-config\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587570 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcvqn\" (UniqueName: \"kubernetes.io/projected/c474281f-3344-4846-9dd2-b78f8c3b7145-kube-api-access-hcvqn\") pod \"downloads-7954f5f757-jmjt6\" (UID: \"c474281f-3344-4846-9dd2-b78f8c3b7145\") " pod="openshift-console/downloads-7954f5f757-jmjt6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587603 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/72b0a364-8275-4bc5-bfcf-744d1caf430e-audit-policies\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587631 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-auth-proxy-config\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587657 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-audit-dir\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587684 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/36ae04e1-ee34-492a-b2af-012a3fb66740-images\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587754 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-oauth-config\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587979 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb80c5ee-bf4c-4eea-be92-0852730ef914-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.587994 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-service-ca\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588024 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txfvj\" (UniqueName: \"kubernetes.io/projected/fa305dd8-20a7-4adb-9c38-1f4cee672164-kube-api-access-txfvj\") pod \"openshift-config-operator-7777fb866f-chzvq\" (UID: \"fa305dd8-20a7-4adb-9c38-1f4cee672164\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588051 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72b0a364-8275-4bc5-bfcf-744d1caf430e-serving-cert\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588253 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4c39836-a956-4dac-a063-a9a8aadfff84-etcd-service-ca\") pod \"etcd-operator-b45778765-wzpd6\" (UID: 
\"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588281 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-serving-cert\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588305 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-serving-cert\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588340 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eff795e9-5c79-4605-8065-56c14471445f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vchb4\" (UID: \"eff795e9-5c79-4605-8065-56c14471445f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588365 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg4hm\" (UniqueName: \"kubernetes.io/projected/168ed525-dbb5-4d4b-967f-0fadb7a7f53f-kube-api-access-pg4hm\") pod \"multus-admission-controller-857f4d67dd-vzq92\" (UID: \"168ed525-dbb5-4d4b-967f-0fadb7a7f53f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588391 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb80c5ee-bf4c-4eea-be92-0852730ef914-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588421 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72b0a364-8275-4bc5-bfcf-744d1caf430e-encryption-config\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588445 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-trusted-ca-bundle\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588469 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw2rp\" (UniqueName: \"kubernetes.io/projected/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-kube-api-access-tw2rp\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc 
kubenswrapper[4823]: I0121 17:18:53.588496 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ae04e1-ee34-492a-b2af-012a3fb66740-config\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588522 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72b0a364-8275-4bc5-bfcf-744d1caf430e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588549 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-machine-approver-tls\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588573 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f3cfefd-b182-4468-a496-dd3c08e72508-metrics-tls\") pod \"dns-operator-744455d44c-dc99l\" (UID: \"7f3cfefd-b182-4468-a496-dd3c08e72508\") " pod="openshift-dns-operator/dns-operator-744455d44c-dc99l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588599 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-serving-cert\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-etcd-client\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t44bf\" (UniqueName: \"kubernetes.io/projected/7f3cfefd-b182-4468-a496-dd3c08e72508-kube-api-access-t44bf\") pod \"dns-operator-744455d44c-dc99l\" (UID: \"7f3cfefd-b182-4468-a496-dd3c08e72508\") " pod="openshift-dns-operator/dns-operator-744455d44c-dc99l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb17a243-ae06-4606-836d-52a1a620bfe6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-z5hx4\" (UID: \"fb17a243-ae06-4606-836d-52a1a620bfe6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588699 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.586735 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jmjt6"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588724 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72b0a364-8275-4bc5-bfcf-744d1caf430e-audit-dir\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588745 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-encryption-config\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588755 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588821 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-config\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588847 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588996 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-config\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.589275 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/72b0a364-8275-4bc5-bfcf-744d1caf430e-audit-policies\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.589518 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.589790 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-auth-proxy-config\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.588847 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td6x9\" (UniqueName: \"kubernetes.io/projected/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-kube-api-access-td6x9\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590074 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vw29d\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " pod="openshift-marketplace/marketplace-operator-79b997595-vw29d"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590135 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa89460-c316-4b9f-9060-44ad75e28e05-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-44lct\" (UID: \"baa89460-c316-4b9f-9060-44ad75e28e05\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590160 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv92n\" (UniqueName: \"kubernetes.io/projected/baa89460-c316-4b9f-9060-44ad75e28e05-kube-api-access-wv92n\") pod \"openshift-controller-manager-operator-756b6f6bc6-44lct\" (UID: \"baa89460-c316-4b9f-9060-44ad75e28e05\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590199 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59l8p\" (UniqueName: \"kubernetes.io/projected/e52e7f7b-367a-496e-8979-cf99488572e9-kube-api-access-59l8p\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590220 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-config\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t"
\"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590240 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-policies\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590259 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590280 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b4c39836-a956-4dac-a063-a9a8aadfff84-etcd-client\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590299 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590322 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590357 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7psph\" (UniqueName: \"kubernetes.io/projected/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-kube-api-access-7psph\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590381 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7cn2\" (UniqueName: \"kubernetes.io/projected/fb17a243-ae06-4606-836d-52a1a620bfe6-kube-api-access-g7cn2\") pod \"openshift-apiserver-operator-796bbdcf4f-z5hx4\" (UID: \"fb17a243-ae06-4606-836d-52a1a620bfe6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590401 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590424 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg9lf\" (UniqueName: \"kubernetes.io/projected/eff795e9-5c79-4605-8065-56c14471445f-kube-api-access-sg9lf\") pod \"cluster-samples-operator-665b6dd947-vchb4\" (UID: \"eff795e9-5c79-4605-8065-56c14471445f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9274abf-35a8-4c79-9995-142413907ffc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k6v4k\" (UID: \"c9274abf-35a8-4c79-9995-142413907ffc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590471 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590491 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72b0a364-8275-4bc5-bfcf-744d1caf430e-etcd-client\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590513 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb80c5ee-bf4c-4eea-be92-0852730ef914-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590534 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c39836-a956-4dac-a063-a9a8aadfff84-serving-cert\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590558 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-client-ca\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590576 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e52e7f7b-367a-496e-8979-cf99488572e9-serving-cert\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590596 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-serving-cert\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590616 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vw29d\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590635 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590659 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e52e7f7b-367a-496e-8979-cf99488572e9-trusted-ca\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590678 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9274abf-35a8-4c79-9995-142413907ffc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k6v4k\" (UID: \"c9274abf-35a8-4c79-9995-142413907ffc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590699 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-config\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590720 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/168ed525-dbb5-4d4b-967f-0fadb7a7f53f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vzq92\" (UID: \"168ed525-dbb5-4d4b-967f-0fadb7a7f53f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590739 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b4c39836-a956-4dac-a063-a9a8aadfff84-etcd-ca\") pod 
\"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.590892 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-trusted-ca-bundle\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.591145 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e52e7f7b-367a-496e-8979-cf99488572e9-config\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.591934 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ae04e1-ee34-492a-b2af-012a3fb66740-config\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.592066 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72b0a364-8275-4bc5-bfcf-744d1caf430e-audit-dir\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.592480 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72b0a364-8275-4bc5-bfcf-744d1caf430e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.594303 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-config\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.595635 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-machine-approver-tls\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.595730 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa89460-c316-4b9f-9060-44ad75e28e05-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-44lct\" (UID: \"baa89460-c316-4b9f-9060-44ad75e28e05\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.595876 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-config\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.596040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fa305dd8-20a7-4adb-9c38-1f4cee672164-available-featuregates\") pod \"openshift-config-operator-7777fb866f-chzvq\" (UID: \"fa305dd8-20a7-4adb-9c38-1f4cee672164\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.596611 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-client-ca\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.597013 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.597684 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-client-ca\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.597706 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-serving-cert\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.598269 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-oauth-config\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.598592 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-config\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.598827 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tnrwq"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.598903 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.599140 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e52e7f7b-367a-496e-8979-cf99488572e9-trusted-ca\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.599452 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-q5pd7"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.599669 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72b0a364-8275-4bc5-bfcf-744d1caf430e-etcd-client\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.599804 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.599826 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.600051 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/36ae04e1-ee34-492a-b2af-012a3fb66740-images\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.601074 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa305dd8-20a7-4adb-9c38-1f4cee672164-serving-cert\") pod \"openshift-config-operator-7777fb866f-chzvq\" (UID: \"fa305dd8-20a7-4adb-9c38-1f4cee672164\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.601226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eff795e9-5c79-4605-8065-56c14471445f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vchb4\" (UID: \"eff795e9-5c79-4605-8065-56c14471445f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.601264 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-serving-cert\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.601818 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.605743 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baa89460-c316-4b9f-9060-44ad75e28e05-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-44lct\" (UID: \"baa89460-c316-4b9f-9060-44ad75e28e05\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.606339 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e52e7f7b-367a-496e-8979-cf99488572e9-serving-cert\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.606631 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.607318 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.607885 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.608287 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-serving-cert\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.608554 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.608545 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ae04e1-ee34-492a-b2af-012a3fb66740-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.610564 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rmxv5"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.611316 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rmxv5" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.611579 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.611941 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.613476 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4dj4t"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.614435 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vw29d"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.614523 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72b0a364-8275-4bc5-bfcf-744d1caf430e-encryption-config\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.615402 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dc99l"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.616098 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72b0a364-8275-4bc5-bfcf-744d1caf430e-serving-cert\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.619244 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.619740 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dhlfg"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.620087 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.620680 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.621057 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.621082 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.627846 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.628125 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.628792 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.628905 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.630009 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.631239 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nrwt6"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.631324 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.631958 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5g629"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.632930 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.633847 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wzpd6"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.634995 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.636527 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.636898 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.637918 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8nkd"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.638928 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.639913 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.640912 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.641774 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4qqzs"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.642396 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.642869 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n4ttw"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.642998 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.643819 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vzq92"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.643865 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.650724 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.655330 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.656989 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.659901 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.661555 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rmxv5"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.661839 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.662692 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.663709 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.664896 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.665768 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z797g"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.666834 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-q5pd7"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.667847 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.670040 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dhlfg"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.671086 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qxt5l"] Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.673132 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.674334 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nxscj"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.675239 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nxscj"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.675380 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nxscj"]
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.682825 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691575 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb80c5ee-bf4c-4eea-be92-0852730ef914-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691603 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691631 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4c39836-a956-4dac-a063-a9a8aadfff84-etcd-service-ca\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691650 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg4hm\" (UniqueName: \"kubernetes.io/projected/168ed525-dbb5-4d4b-967f-0fadb7a7f53f-kube-api-access-pg4hm\") pod \"multus-admission-controller-857f4d67dd-vzq92\" (UID: \"168ed525-dbb5-4d4b-967f-0fadb7a7f53f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691673 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw2rp\" (UniqueName: \"kubernetes.io/projected/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-kube-api-access-tw2rp\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691689 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb80c5ee-bf4c-4eea-be92-0852730ef914-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691707 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f3cfefd-b182-4468-a496-dd3c08e72508-metrics-tls\") pod \"dns-operator-744455d44c-dc99l\" (UID: \"7f3cfefd-b182-4468-a496-dd3c08e72508\") " pod="openshift-dns-operator/dns-operator-744455d44c-dc99l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691725 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-etcd-client\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691743 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t44bf\" (UniqueName: \"kubernetes.io/projected/7f3cfefd-b182-4468-a496-dd3c08e72508-kube-api-access-t44bf\") pod \"dns-operator-744455d44c-dc99l\" (UID: \"7f3cfefd-b182-4468-a496-dd3c08e72508\") " pod="openshift-dns-operator/dns-operator-744455d44c-dc99l"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691758 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb17a243-ae06-4606-836d-52a1a620bfe6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-z5hx4\" (UID: \"fb17a243-ae06-4606-836d-52a1a620bfe6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691804 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-encryption-config\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691824 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vw29d\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " pod="openshift-marketplace/marketplace-operator-79b997595-vw29d"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691840 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691904 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-policies\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691923 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691939 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b4c39836-a956-4dac-a063-a9a8aadfff84-etcd-client\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691956 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.691995 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7cn2\" (UniqueName: \"kubernetes.io/projected/fb17a243-ae06-4606-836d-52a1a620bfe6-kube-api-access-g7cn2\") pod \"openshift-apiserver-operator-796bbdcf4f-z5hx4\" (UID: \"fb17a243-ae06-4606-836d-52a1a620bfe6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692010 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692037 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9274abf-35a8-4c79-9995-142413907ffc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k6v4k\" (UID: \"c9274abf-35a8-4c79-9995-142413907ffc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k"
Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692059 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb80c5ee-bf4c-4eea-be92-0852730ef914-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m"
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb80c5ee-bf4c-4eea-be92-0852730ef914-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692089 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c39836-a956-4dac-a063-a9a8aadfff84-serving-cert\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692113 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-serving-cert\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692138 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9274abf-35a8-4c79-9995-142413907ffc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k6v4k\" (UID: \"c9274abf-35a8-4c79-9995-142413907ffc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692160 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vw29d\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692181 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/168ed525-dbb5-4d4b-967f-0fadb7a7f53f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vzq92\" (UID: \"168ed525-dbb5-4d4b-967f-0fadb7a7f53f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692228 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b4c39836-a956-4dac-a063-a9a8aadfff84-etcd-ca\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692259 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-image-import-ca\") pod 
\"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692279 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692301 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-dir\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692319 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbvng\" (UniqueName: \"kubernetes.io/projected/b61875bd-8183-49ad-a085-7478b5b97ca8-kube-api-access-vbvng\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692336 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c-proxy-tls\") pod \"machine-config-controller-84d6567774-z797g\" (UID: \"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692352 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7nb9\" (UniqueName: \"kubernetes.io/projected/b4c39836-a956-4dac-a063-a9a8aadfff84-kube-api-access-p7nb9\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692370 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-etcd-serving-ca\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692385 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5n8k\" (UniqueName: \"kubernetes.io/projected/cb80c5ee-bf4c-4eea-be92-0852730ef914-kube-api-access-l5n8k\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692403 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c39836-a956-4dac-a063-a9a8aadfff84-config\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692425 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z797g\" (UID: \"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692442 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb17a243-ae06-4606-836d-52a1a620bfe6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-z5hx4\" (UID: \"fb17a243-ae06-4606-836d-52a1a620bfe6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692465 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692476 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb17a243-ae06-4606-836d-52a1a620bfe6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-z5hx4\" (UID: \"fb17a243-ae06-4606-836d-52a1a620bfe6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693152 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-audit\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.692481 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-audit\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693201 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4c39836-a956-4dac-a063-a9a8aadfff84-etcd-service-ca\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-node-pullsecrets\") pod \"apiserver-76f77b778f-5g629\" (UID: 
\"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693272 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-node-pullsecrets\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693299 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b4c39836-a956-4dac-a063-a9a8aadfff84-etcd-ca\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693321 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-config\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693345 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2xm6\" (UniqueName: \"kubernetes.io/projected/c9274abf-35a8-4c79-9995-142413907ffc-kube-api-access-x2xm6\") pod \"kube-storage-version-migrator-operator-b67b599dd-k6v4k\" (UID: \"c9274abf-35a8-4c79-9995-142413907ffc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693372 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm629\" (UniqueName: \"kubernetes.io/projected/2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c-kube-api-access-lm629\") pod \"machine-config-controller-84d6567774-z797g\" (UID: \"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693397 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wc44\" (UniqueName: \"kubernetes.io/projected/2e548be0-d2ab-4bad-a18c-cbe203dbb314-kube-api-access-9wc44\") pod \"marketplace-operator-79b997595-vw29d\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693420 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693454 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-audit-dir\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.693525 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-audit-dir\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.694001 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-dir\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.694192 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb80c5ee-bf4c-4eea-be92-0852730ef914-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.694352 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-etcd-serving-ca\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.694538 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-image-import-ca\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.694800 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9274abf-35a8-4c79-9995-142413907ffc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k6v4k\" (UID: \"c9274abf-35a8-4c79-9995-142413907ffc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.694804 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-config\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.695225 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-etcd-client\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.695747 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vw29d\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 
17:18:53.696118 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.696295 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z797g\" (UID: \"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.696316 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb80c5ee-bf4c-4eea-be92-0852730ef914-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.696371 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb17a243-ae06-4606-836d-52a1a620bfe6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-z5hx4\" (UID: \"fb17a243-ae06-4606-836d-52a1a620bfe6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.696675 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4c39836-a956-4dac-a063-a9a8aadfff84-serving-cert\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.697223 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4c39836-a956-4dac-a063-a9a8aadfff84-config\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.697330 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9274abf-35a8-4c79-9995-142413907ffc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k6v4k\" (UID: \"c9274abf-35a8-4c79-9995-142413907ffc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.697799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-encryption-config\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.698329 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b4c39836-a956-4dac-a063-a9a8aadfff84-etcd-client\") pod 
\"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.701079 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f3cfefd-b182-4468-a496-dd3c08e72508-metrics-tls\") pod \"dns-operator-744455d44c-dc99l\" (UID: \"7f3cfefd-b182-4468-a496-dd3c08e72508\") " pod="openshift-dns-operator/dns-operator-744455d44c-dc99l" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.702187 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.711449 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vw29d\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.714226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-serving-cert\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.722753 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.742905 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.761716 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.782736 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.801880 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.822306 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.842790 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.862295 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.882480 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.887809 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.902593 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.914572 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.936479 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.945961 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.947306 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.960259 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.962252 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.966276 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.982654 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 17:18:53 crc kubenswrapper[4823]: I0121 17:18:53.986234 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.001899 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 17:18:54 crc 
kubenswrapper[4823]: I0121 17:18:54.022916 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.035815 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.041899 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.047565 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.062338 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.064017 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-policies\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.082298 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.083043 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.102530 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.105628 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.128652 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.135975 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.142769 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.162297 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.182411 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.186967 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/168ed525-dbb5-4d4b-967f-0fadb7a7f53f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vzq92\" (UID: \"168ed525-dbb5-4d4b-967f-0fadb7a7f53f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.202649 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.222077 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.243353 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.262981 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.283273 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.302892 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.323150 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.343419 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.362672 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.384292 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.388503 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c-proxy-tls\") pod \"machine-config-controller-84d6567774-z797g\" (UID: \"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.878377 4823 request.go:700] Waited for 
1.283178411s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.894741 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.895596 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.895263 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.896744 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.898922 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.899081 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.899381 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.900302 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.900504 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.900716 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.902763 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.903090 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7psph\" (UniqueName: \"kubernetes.io/projected/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-kube-api-access-7psph\") pod \"controller-manager-879f6c89f-qxt5l\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.905141 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.906187 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.906458 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hkn4\" (UniqueName: \"kubernetes.io/projected/a5fe5944-a8f6-47a3-9dc7-38f9d276848f-kube-api-access-4hkn4\") pod \"machine-approver-56656f9798-5v8pb\" (UID: \"a5fe5944-a8f6-47a3-9dc7-38f9d276848f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" Jan 21 17:18:54 crc 
kubenswrapper[4823]: I0121 17:18:54.908519 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q92dl\" (UniqueName: \"kubernetes.io/projected/ac886837-67ac-48e7-b5cb-024a0ed1ea01-kube-api-access-q92dl\") pod \"console-f9d7485db-4dj4t\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.909881 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9lqf\" (UniqueName: \"kubernetes.io/projected/36ae04e1-ee34-492a-b2af-012a3fb66740-kube-api-access-g9lqf\") pod \"machine-api-operator-5694c8668f-tnrwq\" (UID: \"36ae04e1-ee34-492a-b2af-012a3fb66740\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.917269 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg9lf\" (UniqueName: \"kubernetes.io/projected/eff795e9-5c79-4605-8065-56c14471445f-kube-api-access-sg9lf\") pod \"cluster-samples-operator-665b6dd947-vchb4\" (UID: \"eff795e9-5c79-4605-8065-56c14471445f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.919010 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td6x9\" (UniqueName: \"kubernetes.io/projected/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-kube-api-access-td6x9\") pod \"route-controller-manager-6576b87f9c-4jrzz\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.919337 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txfvj\" (UniqueName: \"kubernetes.io/projected/fa305dd8-20a7-4adb-9c38-1f4cee672164-kube-api-access-txfvj\") pod \"openshift-config-operator-7777fb866f-chzvq\" (UID: \"fa305dd8-20a7-4adb-9c38-1f4cee672164\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.920729 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59l8p\" (UniqueName: \"kubernetes.io/projected/e52e7f7b-367a-496e-8979-cf99488572e9-kube-api-access-59l8p\") pod \"console-operator-58897d9998-qdbwc\" (UID: \"e52e7f7b-367a-496e-8979-cf99488572e9\") " pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.921024 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb9sr\" (UniqueName: \"kubernetes.io/projected/72b0a364-8275-4bc5-bfcf-744d1caf430e-kube-api-access-tb9sr\") pod \"apiserver-7bbb656c7d-g82c4\" (UID: \"72b0a364-8275-4bc5-bfcf-744d1caf430e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.923818 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.925823 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcvqn\" (UniqueName: \"kubernetes.io/projected/c474281f-3344-4846-9dd2-b78f8c3b7145-kube-api-access-hcvqn\") pod \"downloads-7954f5f757-jmjt6\" (UID: \"c474281f-3344-4846-9dd2-b78f8c3b7145\") " pod="openshift-console/downloads-7954f5f757-jmjt6" Jan 21 17:18:54 crc 
kubenswrapper[4823]: I0121 17:18:54.926210 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv92n\" (UniqueName: \"kubernetes.io/projected/baa89460-c316-4b9f-9060-44ad75e28e05-kube-api-access-wv92n\") pod \"openshift-controller-manager-operator-756b6f6bc6-44lct\" (UID: \"baa89460-c316-4b9f-9060-44ad75e28e05\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.942219 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.955175 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.962442 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.971038 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.981723 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:18:54 crc kubenswrapper[4823]: I0121 17:18:54.982166 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:54 crc kubenswrapper[4823]: W0121 17:18:54.984940 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5fe5944_a8f6_47a3_9dc7_38f9d276848f.slice/crio-e04ebb98f03a6bee8b950a9198166560421817e1f0b71b1e3f1e7f9e14bdd355 WatchSource:0}: Error finding container e04ebb98f03a6bee8b950a9198166560421817e1f0b71b1e3f1e7f9e14bdd355: Status 404 returned error can't find the container with id e04ebb98f03a6bee8b950a9198166560421817e1f0b71b1e3f1e7f9e14bdd355 Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.002915 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.021458 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.022275 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.042904 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.049019 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.057976 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.062677 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.076055 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.097485 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.099275 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.106316 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.130166 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.135121 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.141961 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.143101 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jmjt6" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.151302 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.162315 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.170285 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qxt5l"] Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.182900 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.208972 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.224995 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.242301 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.262439 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.286540 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.305750 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.324139 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.342676 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.362512 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.395993 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.401903 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.421972 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.431578 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jmjt6"] Jan 21 17:18:55 crc kubenswrapper[4823]: W0121 17:18:55.439336 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc474281f_3344_4846_9dd2_b78f8c3b7145.slice/crio-fdcd26856401ca102acb95d789533b35e43724580ecf7e43d55d45d8be71e509 WatchSource:0}: Error finding container fdcd26856401ca102acb95d789533b35e43724580ecf7e43d55d45d8be71e509: Status 404 returned error can't find the container with id fdcd26856401ca102acb95d789533b35e43724580ecf7e43d55d45d8be71e509 Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.441384 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.446528 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4dj4t"] Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.463788 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.470983 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-chzvq"] Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.474249 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"] Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.482255 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.502208 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.522138 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.542573 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.561243 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4"] Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.563373 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.565586 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tnrwq"] Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.582605 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.597890 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct"] Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.603242 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.604301 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qdbwc"] Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 
17:18:55.612256 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4"] Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.621801 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.642570 4823 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.661947 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.682580 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.703219 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.721831 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.766329 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg4hm\" (UniqueName: \"kubernetes.io/projected/168ed525-dbb5-4d4b-967f-0fadb7a7f53f-kube-api-access-pg4hm\") pod \"multus-admission-controller-857f4d67dd-vzq92\" (UID: \"168ed525-dbb5-4d4b-967f-0fadb7a7f53f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.785568 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t44bf\" (UniqueName: \"kubernetes.io/projected/7f3cfefd-b182-4468-a496-dd3c08e72508-kube-api-access-t44bf\") pod \"dns-operator-744455d44c-dc99l\" (UID: \"7f3cfefd-b182-4468-a496-dd3c08e72508\") " pod="openshift-dns-operator/dns-operator-744455d44c-dc99l" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.788705 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dc99l" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.804765 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw2rp\" (UniqueName: \"kubernetes.io/projected/ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10-kube-api-access-tw2rp\") pod \"apiserver-76f77b778f-5g629\" (UID: \"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10\") " pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.830834 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7nb9\" (UniqueName: \"kubernetes.io/projected/b4c39836-a956-4dac-a063-a9a8aadfff84-kube-api-access-p7nb9\") pod \"etcd-operator-b45778765-wzpd6\" (UID: \"b4c39836-a956-4dac-a063-a9a8aadfff84\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.841967 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2xm6\" (UniqueName: \"kubernetes.io/projected/c9274abf-35a8-4c79-9995-142413907ffc-kube-api-access-x2xm6\") pod \"kube-storage-version-migrator-operator-b67b599dd-k6v4k\" (UID: \"c9274abf-35a8-4c79-9995-142413907ffc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.865653 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm629\" (UniqueName: \"kubernetes.io/projected/2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c-kube-api-access-lm629\") pod \"machine-config-controller-84d6567774-z797g\" (UID: \"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.879951 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.880290 4823 request.go:700] Waited for 2.186511485s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.881708 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wc44\" (UniqueName: \"kubernetes.io/projected/2e548be0-d2ab-4bad-a18c-cbe203dbb314-kube-api-access-9wc44\") pod \"marketplace-operator-79b997595-vw29d\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.891507 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" event={"ID":"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620","Type":"ContainerStarted","Data":"207f283f7ec34ffb6dfb68141ef977a211644b469bb7e92082cdeca935b7bc61"} Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.892490 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" event={"ID":"a5fe5944-a8f6-47a3-9dc7-38f9d276848f","Type":"ContainerStarted","Data":"e04ebb98f03a6bee8b950a9198166560421817e1f0b71b1e3f1e7f9e14bdd355"} Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.893302 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jmjt6" event={"ID":"c474281f-3344-4846-9dd2-b78f8c3b7145","Type":"ContainerStarted","Data":"fdcd26856401ca102acb95d789533b35e43724580ecf7e43d55d45d8be71e509"} Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.900373 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7cn2\" (UniqueName: \"kubernetes.io/projected/fb17a243-ae06-4606-836d-52a1a620bfe6-kube-api-access-g7cn2\") pod \"openshift-apiserver-operator-796bbdcf4f-z5hx4\" (UID: \"fb17a243-ae06-4606-836d-52a1a620bfe6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.903332 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.919062 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5n8k\" (UniqueName: \"kubernetes.io/projected/cb80c5ee-bf4c-4eea-be92-0852730ef914-kube-api-access-l5n8k\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.937626 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb80c5ee-bf4c-4eea-be92-0852730ef914-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wrc7m\" (UID: \"cb80c5ee-bf4c-4eea-be92-0852730ef914\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:55 crc kubenswrapper[4823]: I0121 17:18:55.993199 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbvng\" (UniqueName: \"kubernetes.io/projected/b61875bd-8183-49ad-a085-7478b5b97ca8-kube-api-access-vbvng\") pod \"oauth-openshift-558db77b4-b8nkd\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:55 crc kubenswrapper[4823]: W0121 17:18:55.997788 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac886837_67ac_48e7_b5cb_024a0ed1ea01.slice/crio-0fca38632b59fddb582d07b781c01bee6f556d10b794276431bb546df00f553e WatchSource:0}: Error finding container 0fca38632b59fddb582d07b781c01bee6f556d10b794276431bb546df00f553e: Status 404 returned error can't find the container with id 0fca38632b59fddb582d07b781c01bee6f556d10b794276431bb546df00f553e Jan 21 17:18:56 crc kubenswrapper[4823]: W0121 17:18:56.000642 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa305dd8_20a7_4adb_9c38_1f4cee672164.slice/crio-4f8421a97d4275da5c4bf58b985fee976f0045db797ac61eb32c408b332b1362 WatchSource:0}: Error finding container 4f8421a97d4275da5c4bf58b985fee976f0045db797ac61eb32c408b332b1362: Status 404 returned error can't find the container with id 4f8421a97d4275da5c4bf58b985fee976f0045db797ac61eb32c408b332b1362 Jan 21 17:18:56 crc kubenswrapper[4823]: W0121 17:18:56.004061 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9647bb1_3272_4e92_8b16_ff16a90dfa8d.slice/crio-c2e19ef6f3bce0cf76c189d8bfef6d139a2c9d4f889c872e7da11743cfcdc749 WatchSource:0}: Error finding container c2e19ef6f3bce0cf76c189d8bfef6d139a2c9d4f889c872e7da11743cfcdc749: Status 404 returned error can't find the container with id c2e19ef6f3bce0cf76c189d8bfef6d139a2c9d4f889c872e7da11743cfcdc749 Jan 21 17:18:56 crc kubenswrapper[4823]: W0121 17:18:56.005425 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72b0a364_8275_4bc5_bfcf_744d1caf430e.slice/crio-c73327481757c8767aa8541cd9a3fd6097430f29c06582e4ff26ee46f23afb1f WatchSource:0}: Error finding container c73327481757c8767aa8541cd9a3fd6097430f29c06582e4ff26ee46f23afb1f: Status 404 returned error can't find the container with id 
c73327481757c8767aa8541cd9a3fd6097430f29c06582e4ff26ee46f23afb1f Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008440 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc7f0194-5611-45d4-b127-1dafa0f1fe76-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k2ppk\" (UID: \"fc7f0194-5611-45d4-b127-1dafa0f1fe76\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008492 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e2b4cce-27bb-496d-8766-7724e90ab8ca-proxy-tls\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008518 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008549 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e2b4cce-27bb-496d-8766-7724e90ab8ca-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008573 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhq7q\" (UniqueName: \"kubernetes.io/projected/18f96219-7979-4162-8a45-7439bdb10075-kube-api-access-vhq7q\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008590 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49ca3409-f1c5-41e7-aabe-3382b23fd48c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-tls\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008638 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-trusted-ca\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008655 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj8tf\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-kube-api-access-fj8tf\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008670 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5px5\" (UniqueName: \"kubernetes.io/projected/e2f88bc8-f1e5-4c78-886a-0a0d2c2e056e-kube-api-access-d5px5\") pod \"migrator-59844c95c7-wlp2h\" (UID: \"e2f88bc8-f1e5-4c78-886a-0a0d2c2e056e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008720 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7853d878-1dc8-4986-b9c8-e857f14a3230-srv-cert\") pod \"olm-operator-6b444d44fb-blkpl\" (UID: \"7853d878-1dc8-4986-b9c8-e857f14a3230\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008768 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f96219-7979-4162-8a45-7439bdb10075-metrics-certs\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008783 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7853d878-1dc8-4986-b9c8-e857f14a3230-profile-collector-cert\") pod \"olm-operator-6b444d44fb-blkpl\" (UID: \"7853d878-1dc8-4986-b9c8-e857f14a3230\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008800 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49ca3409-f1c5-41e7-aabe-3382b23fd48c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008887 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f96219-7979-4162-8a45-7439bdb10075-default-certificate\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008904 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-bound-sa-token\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 
17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008920 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc7f0194-5611-45d4-b127-1dafa0f1fe76-config\") pod \"kube-apiserver-operator-766d6c64bb-k2ppk\" (UID: \"fc7f0194-5611-45d4-b127-1dafa0f1fe76\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.008935 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5vg6\" (UniqueName: \"kubernetes.io/projected/7853d878-1dc8-4986-b9c8-e857f14a3230-kube-api-access-w5vg6\") pod \"olm-operator-6b444d44fb-blkpl\" (UID: \"7853d878-1dc8-4986-b9c8-e857f14a3230\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.009021 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w42pm\" (UniqueName: \"kubernetes.io/projected/0e2b4cce-27bb-496d-8766-7724e90ab8ca-kube-api-access-w42pm\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.009060 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e2b4cce-27bb-496d-8766-7724e90ab8ca-images\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.009079 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f96219-7979-4162-8a45-7439bdb10075-service-ca-bundle\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.009097 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f96219-7979-4162-8a45-7439bdb10075-stats-auth\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.009118 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc7f0194-5611-45d4-b127-1dafa0f1fe76-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k2ppk\" (UID: \"fc7f0194-5611-45d4-b127-1dafa0f1fe76\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.009168 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-certificates\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc 
kubenswrapper[4823]: E0121 17:18:56.011193 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:56.511172226 +0000 UTC m=+137.437303156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: W0121 17:18:56.022461 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode52e7f7b_367a_496e_8979_cf99488572e9.slice/crio-85b1d19da9e8582718689423e155762dc0b6cfcacfd431dc577ab0a841a1cfaa WatchSource:0}: Error finding container 85b1d19da9e8582718689423e155762dc0b6cfcacfd431dc577ab0a841a1cfaa: Status 404 returned error can't find the container with id 85b1d19da9e8582718689423e155762dc0b6cfcacfd431dc577ab0a841a1cfaa Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.061765 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.072291 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.078920 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.109587 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.109726 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:56.609703791 +0000 UTC m=+137.535834651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.109899 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs26h\" (UniqueName: \"kubernetes.io/projected/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-kube-api-access-fs26h\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.109948 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3536b7cb-5def-4468-9282-897a30251cd4-apiservice-cert\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110012 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/481b8384-d11b-4bcb-9705-00065afa020f-signing-cabundle\") pod \"service-ca-9c57cc56f-q5pd7\" (UID: \"481b8384-d11b-4bcb-9705-00065afa020f\") " pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110028 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/096622f1-acd1-42ea-af9c-d43158bccf6c-metrics-tls\") pod \"dns-default-nxscj\" (UID: \"096622f1-acd1-42ea-af9c-d43158bccf6c\") " pod="openshift-dns/dns-default-nxscj" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110044 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpx8l\" (UniqueName: \"kubernetes.io/projected/096622f1-acd1-42ea-af9c-d43158bccf6c-kube-api-access-hpx8l\") pod \"dns-default-nxscj\" (UID: \"096622f1-acd1-42ea-af9c-d43158bccf6c\") " pod="openshift-dns/dns-default-nxscj" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110097 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f96219-7979-4162-8a45-7439bdb10075-default-certificate\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110159 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-bound-sa-token\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110179 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-config\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110554 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-service-ca-bundle\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc7f0194-5611-45d4-b127-1dafa0f1fe76-config\") pod \"kube-apiserver-operator-766d6c64bb-k2ppk\" (UID: \"fc7f0194-5611-45d4-b127-1dafa0f1fe76\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110627 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5vg6\" (UniqueName: \"kubernetes.io/projected/7853d878-1dc8-4986-b9c8-e857f14a3230-kube-api-access-w5vg6\") pod \"olm-operator-6b444d44fb-blkpl\" (UID: \"7853d878-1dc8-4986-b9c8-e857f14a3230\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110651 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37f78636-4c9d-48e0-869d-dbfd7a83eace-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mc68\" (UID: \"37f78636-4c9d-48e0-869d-dbfd7a83eace\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37f78636-4c9d-48e0-869d-dbfd7a83eace-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mc68\" (UID: \"37f78636-4c9d-48e0-869d-dbfd7a83eace\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110695 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29fae590-dcbb-4d4c-8941-ad734c10dfef-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2c287\" (UID: \"29fae590-dcbb-4d4c-8941-ad734c10dfef\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.110843 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jlzv\" (UniqueName: \"kubernetes.io/projected/423ca236-207d-44cf-91f5-bdafb1c778a5-kube-api-access-6jlzv\") pod \"service-ca-operator-777779d784-jmn4h\" (UID: \"423ca236-207d-44cf-91f5-bdafb1c778a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111047 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsbzt\" (UniqueName: \"kubernetes.io/projected/52c8e75a-d225-4a9a-85a4-783998a290df-kube-api-access-qsbzt\") pod \"ingress-canary-rmxv5\" (UID: \"52c8e75a-d225-4a9a-85a4-783998a290df\") " pod="openshift-ingress-canary/ingress-canary-rmxv5" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111083 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52c8e75a-d225-4a9a-85a4-783998a290df-cert\") pod \"ingress-canary-rmxv5\" (UID: \"52c8e75a-d225-4a9a-85a4-783998a290df\") " pod="openshift-ingress-canary/ingress-canary-rmxv5" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111118 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w42pm\" (UniqueName: \"kubernetes.io/projected/0e2b4cce-27bb-496d-8766-7724e90ab8ca-kube-api-access-w42pm\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111168 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450e2dbe-320e-45fa-8122-26b905dfb601-config-volume\") pod \"collect-profiles-29483595-ksvrx\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111189 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/df1c9fc3-bc84-4d96-b34e-c059587cffc7-certs\") pod \"machine-config-server-4qqzs\" (UID: \"df1c9fc3-bc84-4d96-b34e-c059587cffc7\") " pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111211 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e2b4cce-27bb-496d-8766-7724e90ab8ca-images\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111232 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f96219-7979-4162-8a45-7439bdb10075-stats-auth\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111252 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc7f0194-5611-45d4-b127-1dafa0f1fe76-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k2ppk\" (UID: \"fc7f0194-5611-45d4-b127-1dafa0f1fe76\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111319 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f96219-7979-4162-8a45-7439bdb10075-service-ca-bundle\") pod 
\"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111367 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-certificates\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111476 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbm6r\" (UniqueName: \"kubernetes.io/projected/450e2dbe-320e-45fa-8122-26b905dfb601-kube-api-access-lbm6r\") pod \"collect-profiles-29483595-ksvrx\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1438d22a-c19a-427d-b1b5-02cbf2675461-profile-collector-cert\") pod \"catalog-operator-68c6474976-qsplv\" (UID: \"1438d22a-c19a-427d-b1b5-02cbf2675461\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111557 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-trusted-ca\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111578 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/481b8384-d11b-4bcb-9705-00065afa020f-signing-key\") pod \"service-ca-9c57cc56f-q5pd7\" (UID: \"481b8384-d11b-4bcb-9705-00065afa020f\") " pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111630 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f84nb\" (UniqueName: \"kubernetes.io/projected/1438d22a-c19a-427d-b1b5-02cbf2675461-kube-api-access-f84nb\") pod \"catalog-operator-68c6474976-qsplv\" (UID: \"1438d22a-c19a-427d-b1b5-02cbf2675461\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.111791 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc7f0194-5611-45d4-b127-1dafa0f1fe76-config\") pod \"kube-apiserver-operator-766d6c64bb-k2ppk\" (UID: \"fc7f0194-5611-45d4-b127-1dafa0f1fe76\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.112390 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f96219-7979-4162-8a45-7439bdb10075-service-ca-bundle\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " 
pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113115 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e2b4cce-27bb-496d-8766-7724e90ab8ca-images\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113217 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/df1c9fc3-bc84-4d96-b34e-c059587cffc7-node-bootstrap-token\") pod \"machine-config-server-4qqzs\" (UID: \"df1c9fc3-bc84-4d96-b34e-c059587cffc7\") " pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113250 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37f78636-4c9d-48e0-869d-dbfd7a83eace-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mc68\" (UID: \"37f78636-4c9d-48e0-869d-dbfd7a83eace\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113271 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113298 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc7f0194-5611-45d4-b127-1dafa0f1fe76-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k2ppk\" (UID: \"fc7f0194-5611-45d4-b127-1dafa0f1fe76\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113345 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fae590-dcbb-4d4c-8941-ad734c10dfef-config\") pod \"kube-controller-manager-operator-78b949d7b-2c287\" (UID: \"29fae590-dcbb-4d4c-8941-ad734c10dfef\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113385 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e2b4cce-27bb-496d-8766-7724e90ab8ca-proxy-tls\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113410 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-serving-cert\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113435 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b9g2\" (UniqueName: \"kubernetes.io/projected/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-kube-api-access-5b9g2\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113457 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a64c737-acdd-4653-ad52-bcf69b3b69f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghcjn\" (UID: \"7a64c737-acdd-4653-ad52-bcf69b3b69f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113479 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkht8\" (UniqueName: \"kubernetes.io/projected/3536b7cb-5def-4468-9282-897a30251cd4-kube-api-access-kkht8\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113511 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vfds\" (UniqueName: \"kubernetes.io/projected/7e7a4213-8205-498b-8390-506f5f273557-kube-api-access-4vfds\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113584 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-579gd\" (UniqueName: \"kubernetes.io/projected/7a64c737-acdd-4653-ad52-bcf69b3b69f8-kube-api-access-579gd\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghcjn\" (UID: \"7a64c737-acdd-4653-ad52-bcf69b3b69f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113610 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-mountpoint-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113630 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-csi-data-dir\") pod 
\"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e2b4cce-27bb-496d-8766-7724e90ab8ca-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113680 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjc8d\" (UniqueName: \"kubernetes.io/projected/481b8384-d11b-4bcb-9705-00065afa020f-kube-api-access-fjc8d\") pod \"service-ca-9c57cc56f-q5pd7\" (UID: \"481b8384-d11b-4bcb-9705-00065afa020f\") " pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113708 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-certificates\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhq7q\" (UniqueName: \"kubernetes.io/projected/18f96219-7979-4162-8a45-7439bdb10075-kube-api-access-vhq7q\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113761 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49ca3409-f1c5-41e7-aabe-3382b23fd48c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113784 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-socket-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113810 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/096622f1-acd1-42ea-af9c-d43158bccf6c-config-volume\") pod \"dns-default-nxscj\" (UID: \"096622f1-acd1-42ea-af9c-d43158bccf6c\") " pod="openshift-dns/dns-default-nxscj" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113837 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-plugins-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113879 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-tls\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113905 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhbg\" (UniqueName: \"kubernetes.io/projected/df1c9fc3-bc84-4d96-b34e-c059587cffc7-kube-api-access-fvhbg\") pod \"machine-config-server-4qqzs\" (UID: \"df1c9fc3-bc84-4d96-b34e-c059587cffc7\") " pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113934 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113954 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1438d22a-c19a-427d-b1b5-02cbf2675461-srv-cert\") pod \"catalog-operator-68c6474976-qsplv\" (UID: \"1438d22a-c19a-427d-b1b5-02cbf2675461\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113975 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj8tf\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-kube-api-access-fj8tf\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.113993 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5px5\" (UniqueName: \"kubernetes.io/projected/e2f88bc8-f1e5-4c78-886a-0a0d2c2e056e-kube-api-access-d5px5\") pod \"migrator-59844c95c7-wlp2h\" (UID: \"e2f88bc8-f1e5-4c78-886a-0a0d2c2e056e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114173 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f96219-7979-4162-8a45-7439bdb10075-default-certificate\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114319 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-trusted-ca\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114354 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frnf7\" (UniqueName: 
\"kubernetes.io/projected/296482bb-604f-44b4-be2f-2dab346045ce-kube-api-access-frnf7\") pod \"package-server-manager-789f6589d5-knqcz\" (UID: \"296482bb-604f-44b4-be2f-2dab346045ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114403 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-registration-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114493 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423ca236-207d-44cf-91f5-bdafb1c778a5-serving-cert\") pod \"service-ca-operator-777779d784-jmn4h\" (UID: \"423ca236-207d-44cf-91f5-bdafb1c778a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114532 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3536b7cb-5def-4468-9282-897a30251cd4-tmpfs\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114554 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7853d878-1dc8-4986-b9c8-e857f14a3230-srv-cert\") pod \"olm-operator-6b444d44fb-blkpl\" (UID: \"7853d878-1dc8-4986-b9c8-e857f14a3230\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114578 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/296482bb-604f-44b4-be2f-2dab346045ce-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-knqcz\" (UID: \"296482bb-604f-44b4-be2f-2dab346045ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114601 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-metrics-tls\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114638 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423ca236-207d-44cf-91f5-bdafb1c778a5-config\") pod \"service-ca-operator-777779d784-jmn4h\" (UID: \"423ca236-207d-44cf-91f5-bdafb1c778a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114661 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/450e2dbe-320e-45fa-8122-26b905dfb601-secret-volume\") pod \"collect-profiles-29483595-ksvrx\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114710 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3536b7cb-5def-4468-9282-897a30251cd4-webhook-cert\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.114887 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e2b4cce-27bb-496d-8766-7724e90ab8ca-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.115097 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:56.615081447 +0000 UTC m=+137.541212307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.122804 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f96219-7979-4162-8a45-7439bdb10075-stats-auth\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.123051 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49ca3409-f1c5-41e7-aabe-3382b23fd48c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.123590 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc7f0194-5611-45d4-b127-1dafa0f1fe76-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k2ppk\" (UID: \"fc7f0194-5611-45d4-b127-1dafa0f1fe76\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.123727 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e2b4cce-27bb-496d-8766-7724e90ab8ca-proxy-tls\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.123909 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f96219-7979-4162-8a45-7439bdb10075-metrics-certs\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.123963 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7853d878-1dc8-4986-b9c8-e857f14a3230-profile-collector-cert\") pod \"olm-operator-6b444d44fb-blkpl\" (UID: \"7853d878-1dc8-4986-b9c8-e857f14a3230\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.127206 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49ca3409-f1c5-41e7-aabe-3382b23fd48c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.129213 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29fae590-dcbb-4d4c-8941-ad734c10dfef-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2c287\" (UID: \"29fae590-dcbb-4d4c-8941-ad734c10dfef\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.127593 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-tls\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.127215 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.131916 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49ca3409-f1c5-41e7-aabe-3382b23fd48c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.132181 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.133035 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-trusted-ca\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.135533 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7853d878-1dc8-4986-b9c8-e857f14a3230-profile-collector-cert\") pod \"olm-operator-6b444d44fb-blkpl\" (UID: \"7853d878-1dc8-4986-b9c8-e857f14a3230\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.138535 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7853d878-1dc8-4986-b9c8-e857f14a3230-srv-cert\") pod \"olm-operator-6b444d44fb-blkpl\" (UID: \"7853d878-1dc8-4986-b9c8-e857f14a3230\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.139720 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f96219-7979-4162-8a45-7439bdb10075-metrics-certs\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.141876 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.150165 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-bound-sa-token\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.160991 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5vg6\" (UniqueName: \"kubernetes.io/projected/7853d878-1dc8-4986-b9c8-e857f14a3230-kube-api-access-w5vg6\") pod \"olm-operator-6b444d44fb-blkpl\" (UID: \"7853d878-1dc8-4986-b9c8-e857f14a3230\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.172250 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.181050 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5px5\" (UniqueName: \"kubernetes.io/projected/e2f88bc8-f1e5-4c78-886a-0a0d2c2e056e-kube-api-access-d5px5\") pod \"migrator-59844c95c7-wlp2h\" (UID: \"e2f88bc8-f1e5-4c78-886a-0a0d2c2e056e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.200209 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230464 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230631 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f84nb\" (UniqueName: \"kubernetes.io/projected/1438d22a-c19a-427d-b1b5-02cbf2675461-kube-api-access-f84nb\") pod \"catalog-operator-68c6474976-qsplv\" (UID: \"1438d22a-c19a-427d-b1b5-02cbf2675461\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230661 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/df1c9fc3-bc84-4d96-b34e-c059587cffc7-node-bootstrap-token\") pod \"machine-config-server-4qqzs\" (UID: \"df1c9fc3-bc84-4d96-b34e-c059587cffc7\") " pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230679 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37f78636-4c9d-48e0-869d-dbfd7a83eace-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mc68\" (UID: \"37f78636-4c9d-48e0-869d-dbfd7a83eace\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230697 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230714 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fae590-dcbb-4d4c-8941-ad734c10dfef-config\") pod \"kube-controller-manager-operator-78b949d7b-2c287\" (UID: \"29fae590-dcbb-4d4c-8941-ad734c10dfef\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230729 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-serving-cert\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230746 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b9g2\" (UniqueName: \"kubernetes.io/projected/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-kube-api-access-5b9g2\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" 
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230763 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a64c737-acdd-4653-ad52-bcf69b3b69f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghcjn\" (UID: \"7a64c737-acdd-4653-ad52-bcf69b3b69f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230784 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkht8\" (UniqueName: \"kubernetes.io/projected/3536b7cb-5def-4468-9282-897a30251cd4-kube-api-access-kkht8\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230801 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vfds\" (UniqueName: \"kubernetes.io/projected/7e7a4213-8205-498b-8390-506f5f273557-kube-api-access-4vfds\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230825 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-579gd\" (UniqueName: \"kubernetes.io/projected/7a64c737-acdd-4653-ad52-bcf69b3b69f8-kube-api-access-579gd\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghcjn\" (UID: \"7a64c737-acdd-4653-ad52-bcf69b3b69f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230841 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-mountpoint-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230870 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-csi-data-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230898 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjc8d\" (UniqueName: \"kubernetes.io/projected/481b8384-d11b-4bcb-9705-00065afa020f-kube-api-access-fjc8d\") pod \"service-ca-9c57cc56f-q5pd7\" (UID: \"481b8384-d11b-4bcb-9705-00065afa020f\") " pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230914 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-socket-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230931 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/096622f1-acd1-42ea-af9c-d43158bccf6c-config-volume\") pod \"dns-default-nxscj\" (UID: \"096622f1-acd1-42ea-af9c-d43158bccf6c\") " pod="openshift-dns/dns-default-nxscj"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230948 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-plugins-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230963 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhbg\" (UniqueName: \"kubernetes.io/projected/df1c9fc3-bc84-4d96-b34e-c059587cffc7-kube-api-access-fvhbg\") pod \"machine-config-server-4qqzs\" (UID: \"df1c9fc3-bc84-4d96-b34e-c059587cffc7\") " pod="openshift-machine-config-operator/machine-config-server-4qqzs"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.230986 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231005 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1438d22a-c19a-427d-b1b5-02cbf2675461-srv-cert\") pod \"catalog-operator-68c6474976-qsplv\" (UID: \"1438d22a-c19a-427d-b1b5-02cbf2675461\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231028 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frnf7\" (UniqueName: \"kubernetes.io/projected/296482bb-604f-44b4-be2f-2dab346045ce-kube-api-access-frnf7\") pod \"package-server-manager-789f6589d5-knqcz\" (UID: \"296482bb-604f-44b4-be2f-2dab346045ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231042 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-registration-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423ca236-207d-44cf-91f5-bdafb1c778a5-serving-cert\") pod \"service-ca-operator-777779d784-jmn4h\" (UID: \"423ca236-207d-44cf-91f5-bdafb1c778a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231075 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3536b7cb-5def-4468-9282-897a30251cd4-tmpfs\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231094 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/296482bb-604f-44b4-be2f-2dab346045ce-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-knqcz\" (UID: \"296482bb-604f-44b4-be2f-2dab346045ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231108 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-metrics-tls\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231124 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450e2dbe-320e-45fa-8122-26b905dfb601-secret-volume\") pod \"collect-profiles-29483595-ksvrx\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231140 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423ca236-207d-44cf-91f5-bdafb1c778a5-config\") pod \"service-ca-operator-777779d784-jmn4h\" (UID: \"423ca236-207d-44cf-91f5-bdafb1c778a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h"
Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.231174 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:56.731148496 +0000 UTC m=+137.657279416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231221 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3536b7cb-5def-4468-9282-897a30251cd4-webhook-cert\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231281 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29fae590-dcbb-4d4c-8941-ad734c10dfef-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2c287\" (UID: \"29fae590-dcbb-4d4c-8941-ad734c10dfef\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231328 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs26h\" (UniqueName: \"kubernetes.io/projected/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-kube-api-access-fs26h\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231382 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3536b7cb-5def-4468-9282-897a30251cd4-apiservice-cert\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231414 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/481b8384-d11b-4bcb-9705-00065afa020f-signing-cabundle\") pod \"service-ca-9c57cc56f-q5pd7\" (UID: \"481b8384-d11b-4bcb-9705-00065afa020f\") " pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231439 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/096622f1-acd1-42ea-af9c-d43158bccf6c-metrics-tls\") pod \"dns-default-nxscj\" (UID: \"096622f1-acd1-42ea-af9c-d43158bccf6c\") " pod="openshift-dns/dns-default-nxscj"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231465 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpx8l\" (UniqueName: \"kubernetes.io/projected/096622f1-acd1-42ea-af9c-d43158bccf6c-kube-api-access-hpx8l\") pod \"dns-default-nxscj\" (UID: \"096622f1-acd1-42ea-af9c-d43158bccf6c\") " pod="openshift-dns/dns-default-nxscj"
Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231491 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-config\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg"
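The TearDown error above is the key failure in this window: every operation against pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 is rejected because kubevirt.io.hostpath-provisioner has not yet registered with the kubelet, and the csi-hostpathplugin-n4ttw pod that provides it is itself still being mounted a few entries up. Registered drivers are mirrored into the node's CSINode object, so one way to watch for registration from outside is a small client-go query like this sketch (the node name "crc" and the kubeconfig path are illustrative assumptions, not taken from the log):

```go
// csidrivers.go - a minimal sketch (not part of the log's tooling) that lists
// the CSI drivers recorded on a node's CSINode object, the API-side mirror of
// the registry behind "driver name ... not found in the list of registered
// CSI drivers".
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// One CSINode exists per node; each plugin that registers with the kubelet
	// appends a driver entry to its spec.
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Printf("registered driver: %s (node ID %s)\n", d.Name, d.NodeID)
	}
}
```

Once the plugin pod is running and its registration sidecar (typically node-driver-registrar) has spoken to the kubelet over the plugin socket, kubevirt.io.hostpath-provisioner appears in this list and the parked retries below start succeeding.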
\"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-config\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231522 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-service-ca-bundle\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231547 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29fae590-dcbb-4d4c-8941-ad734c10dfef-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2c287\" (UID: \"29fae590-dcbb-4d4c-8941-ad734c10dfef\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231580 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37f78636-4c9d-48e0-869d-dbfd7a83eace-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mc68\" (UID: \"37f78636-4c9d-48e0-869d-dbfd7a83eace\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231603 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37f78636-4c9d-48e0-869d-dbfd7a83eace-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mc68\" (UID: \"37f78636-4c9d-48e0-869d-dbfd7a83eace\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231632 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jlzv\" (UniqueName: \"kubernetes.io/projected/423ca236-207d-44cf-91f5-bdafb1c778a5-kube-api-access-6jlzv\") pod \"service-ca-operator-777779d784-jmn4h\" (UID: \"423ca236-207d-44cf-91f5-bdafb1c778a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsbzt\" (UniqueName: \"kubernetes.io/projected/52c8e75a-d225-4a9a-85a4-783998a290df-kube-api-access-qsbzt\") pod \"ingress-canary-rmxv5\" (UID: \"52c8e75a-d225-4a9a-85a4-783998a290df\") " pod="openshift-ingress-canary/ingress-canary-rmxv5" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231679 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52c8e75a-d225-4a9a-85a4-783998a290df-cert\") pod \"ingress-canary-rmxv5\" (UID: \"52c8e75a-d225-4a9a-85a4-783998a290df\") " pod="openshift-ingress-canary/ingress-canary-rmxv5" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231730 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450e2dbe-320e-45fa-8122-26b905dfb601-config-volume\") pod \"collect-profiles-29483595-ksvrx\" 
(UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231750 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/df1c9fc3-bc84-4d96-b34e-c059587cffc7-certs\") pod \"machine-config-server-4qqzs\" (UID: \"df1c9fc3-bc84-4d96-b34e-c059587cffc7\") " pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231808 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423ca236-207d-44cf-91f5-bdafb1c778a5-config\") pod \"service-ca-operator-777779d784-jmn4h\" (UID: \"423ca236-207d-44cf-91f5-bdafb1c778a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231816 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbm6r\" (UniqueName: \"kubernetes.io/projected/450e2dbe-320e-45fa-8122-26b905dfb601-kube-api-access-lbm6r\") pod \"collect-profiles-29483595-ksvrx\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231877 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-trusted-ca\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231903 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1438d22a-c19a-427d-b1b5-02cbf2675461-profile-collector-cert\") pod \"catalog-operator-68c6474976-qsplv\" (UID: \"1438d22a-c19a-427d-b1b5-02cbf2675461\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.231927 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/481b8384-d11b-4bcb-9705-00065afa020f-signing-key\") pod \"service-ca-9c57cc56f-q5pd7\" (UID: \"481b8384-d11b-4bcb-9705-00065afa020f\") " pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.232917 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-mountpoint-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.233008 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-csi-data-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.233337 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-socket-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.233541 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc7f0194-5611-45d4-b127-1dafa0f1fe76-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k2ppk\" (UID: \"fc7f0194-5611-45d4-b127-1dafa0f1fe76\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.233596 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-plugins-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.233999 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/096622f1-acd1-42ea-af9c-d43158bccf6c-config-volume\") pod \"dns-default-nxscj\" (UID: \"096622f1-acd1-42ea-af9c-d43158bccf6c\") " pod="openshift-dns/dns-default-nxscj" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.234234 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-config\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.236102 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37f78636-4c9d-48e0-869d-dbfd7a83eace-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mc68\" (UID: \"37f78636-4c9d-48e0-869d-dbfd7a83eace\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.237169 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/df1c9fc3-bc84-4d96-b34e-c059587cffc7-node-bootstrap-token\") pod \"machine-config-server-4qqzs\" (UID: \"df1c9fc3-bc84-4d96-b34e-c059587cffc7\") " pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.237581 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-service-ca-bundle\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.237960 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/481b8384-d11b-4bcb-9705-00065afa020f-signing-cabundle\") pod \"service-ca-9c57cc56f-q5pd7\" (UID: \"481b8384-d11b-4bcb-9705-00065afa020f\") " pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.244440 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fae590-dcbb-4d4c-8941-ad734c10dfef-config\") pod \"kube-controller-manager-operator-78b949d7b-2c287\" (UID: \"29fae590-dcbb-4d4c-8941-ad734c10dfef\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.244742 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/296482bb-604f-44b4-be2f-2dab346045ce-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-knqcz\" (UID: \"296482bb-604f-44b4-be2f-2dab346045ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.245163 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3536b7cb-5def-4468-9282-897a30251cd4-tmpfs\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.245227 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-serving-cert\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.245435 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3536b7cb-5def-4468-9282-897a30251cd4-apiservice-cert\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.245918 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3536b7cb-5def-4468-9282-897a30251cd4-webhook-cert\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.245986 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e7a4213-8205-498b-8390-506f5f273557-registration-dir\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.246048 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.246486 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7a64c737-acdd-4653-ad52-bcf69b3b69f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghcjn\" (UID: \"7a64c737-acdd-4653-ad52-bcf69b3b69f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.246973 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-trusted-ca\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.247120 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/481b8384-d11b-4bcb-9705-00065afa020f-signing-key\") pod \"service-ca-9c57cc56f-q5pd7\" (UID: \"481b8384-d11b-4bcb-9705-00065afa020f\") " pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.247872 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29fae590-dcbb-4d4c-8941-ad734c10dfef-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2c287\" (UID: \"29fae590-dcbb-4d4c-8941-ad734c10dfef\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.248046 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450e2dbe-320e-45fa-8122-26b905dfb601-config-volume\") pod \"collect-profiles-29483595-ksvrx\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.249185 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w42pm\" (UniqueName: \"kubernetes.io/projected/0e2b4cce-27bb-496d-8766-7724e90ab8ca-kube-api-access-w42pm\") pod \"machine-config-operator-74547568cd-kztl2\" (UID: \"0e2b4cce-27bb-496d-8766-7724e90ab8ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.250044 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/096622f1-acd1-42ea-af9c-d43158bccf6c-metrics-tls\") pod \"dns-default-nxscj\" (UID: \"096622f1-acd1-42ea-af9c-d43158bccf6c\") " pod="openshift-dns/dns-default-nxscj" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.252348 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1438d22a-c19a-427d-b1b5-02cbf2675461-profile-collector-cert\") pod \"catalog-operator-68c6474976-qsplv\" (UID: \"1438d22a-c19a-427d-b1b5-02cbf2675461\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.252566 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1438d22a-c19a-427d-b1b5-02cbf2675461-srv-cert\") pod \"catalog-operator-68c6474976-qsplv\" (UID: \"1438d22a-c19a-427d-b1b5-02cbf2675461\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.253341 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-metrics-tls\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.253566 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423ca236-207d-44cf-91f5-bdafb1c778a5-serving-cert\") pod \"service-ca-operator-777779d784-jmn4h\" (UID: \"423ca236-207d-44cf-91f5-bdafb1c778a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.257243 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z797g"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.257752 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52c8e75a-d225-4a9a-85a4-783998a290df-cert\") pod \"ingress-canary-rmxv5\" (UID: \"52c8e75a-d225-4a9a-85a4-783998a290df\") " pod="openshift-ingress-canary/ingress-canary-rmxv5" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.258004 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450e2dbe-320e-45fa-8122-26b905dfb601-secret-volume\") pod \"collect-profiles-29483595-ksvrx\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.259780 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/df1c9fc3-bc84-4d96-b34e-c059587cffc7-certs\") pod \"machine-config-server-4qqzs\" (UID: \"df1c9fc3-bc84-4d96-b34e-c059587cffc7\") " pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.263380 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37f78636-4c9d-48e0-869d-dbfd7a83eace-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mc68\" (UID: \"37f78636-4c9d-48e0-869d-dbfd7a83eace\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.270972 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhq7q\" (UniqueName: \"kubernetes.io/projected/18f96219-7979-4162-8a45-7439bdb10075-kube-api-access-vhq7q\") pod \"router-default-5444994796-vttrq\" (UID: \"18f96219-7979-4162-8a45-7439bdb10075\") " pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.293369 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj8tf\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-kube-api-access-fj8tf\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 
17:18:56.323912 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b9g2\" (UniqueName: \"kubernetes.io/projected/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-kube-api-access-5b9g2\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.333095 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.333573 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:56.833553469 +0000 UTC m=+137.759684329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.358172 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhbg\" (UniqueName: \"kubernetes.io/projected/df1c9fc3-bc84-4d96-b34e-c059587cffc7-kube-api-access-fvhbg\") pod \"machine-config-server-4qqzs\" (UID: \"df1c9fc3-bc84-4d96-b34e-c059587cffc7\") " pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.360899 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkht8\" (UniqueName: \"kubernetes.io/projected/3536b7cb-5def-4468-9282-897a30251cd4-kube-api-access-kkht8\") pod \"packageserver-d55dfcdfc-wml2h\" (UID: \"3536b7cb-5def-4468-9282-897a30251cd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.400821 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vfds\" (UniqueName: \"kubernetes.io/projected/7e7a4213-8205-498b-8390-506f5f273557-kube-api-access-4vfds\") pod \"csi-hostpathplugin-n4ttw\" (UID: \"7e7a4213-8205-498b-8390-506f5f273557\") " pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.408119 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-579gd\" (UniqueName: \"kubernetes.io/projected/7a64c737-acdd-4653-ad52-bcf69b3b69f8-kube-api-access-579gd\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghcjn\" (UID: \"7a64c737-acdd-4653-ad52-bcf69b3b69f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.424329 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpx8l\" (UniqueName: 
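Each failed volume operation above is parked by nestedpendingoperations rather than retried in a hot loop: "No retries permitted until ... (durationBeforeRetry 500ms)" records the deadline before which the reconciler will not touch that volume again. A standalone sketch of that parking pattern follows; the names, the doubling policy, and the cap are illustrative assumptions for the sketch, not the kubelet's exact implementation:

```go
// backoff.go - a minimal sketch of per-operation retry parking, in the spirit
// of the "durationBeforeRetry" entries in the log. Constants are assumptions.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // matches the 500ms seen in the log
	maxDelay     = 2 * time.Minute        // assumed cap for the sketch
)

// pendingOp tracks the retry state of one volume operation.
type pendingOp struct {
	delay     time.Duration // the next durationBeforeRetry
	notBefore time.Time     // "No retries permitted until ..."
}

// fail records a failure and pushes the next attempt out, doubling the delay
// on consecutive failures up to maxDelay.
func (p *pendingOp) fail(now time.Time) {
	if p.delay == 0 {
		p.delay = initialDelay
	} else {
		p.delay *= 2
		if p.delay > maxDelay {
			p.delay = maxDelay
		}
	}
	p.notBefore = now.Add(p.delay)
}

// ready reports whether the operation may be attempted again.
func (p *pendingOp) ready(now time.Time) bool {
	return !now.Before(p.notBefore)
}

func main() {
	op := &pendingOp{}
	now := time.Now()
	for attempt := 1; attempt <= 4; attempt++ {
		// Simulate the mount failing while the CSI driver is unregistered.
		op.fail(now)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt, op.notBefore.Format(time.RFC3339Nano), op.delay)
		if !op.ready(now) {
			now = op.notBefore // fast-forward to the next permitted attempt
		}
	}
}
```

The effect visible in the log is exactly this: the same pvc-657094db unmount and mount operations reappear roughly every 500 ms, each time re-parked because the driver is still missing.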
\"kubernetes.io/projected/096622f1-acd1-42ea-af9c-d43158bccf6c-kube-api-access-hpx8l\") pod \"dns-default-nxscj\" (UID: \"096622f1-acd1-42ea-af9c-d43158bccf6c\") " pod="openshift-dns/dns-default-nxscj" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.436329 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.436841 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:56.936825724 +0000 UTC m=+137.862956584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.448196 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.450489 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f84nb\" (UniqueName: \"kubernetes.io/projected/1438d22a-c19a-427d-b1b5-02cbf2675461-kube-api-access-f84nb\") pod \"catalog-operator-68c6474976-qsplv\" (UID: \"1438d22a-c19a-427d-b1b5-02cbf2675461\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.455317 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.462868 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjc8d\" (UniqueName: \"kubernetes.io/projected/481b8384-d11b-4bcb-9705-00065afa020f-kube-api-access-fjc8d\") pod \"service-ca-9c57cc56f-q5pd7\" (UID: \"481b8384-d11b-4bcb-9705-00065afa020f\") " pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.463489 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.490055 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsbzt\" (UniqueName: \"kubernetes.io/projected/52c8e75a-d225-4a9a-85a4-783998a290df-kube-api-access-qsbzt\") pod \"ingress-canary-rmxv5\" (UID: \"52c8e75a-d225-4a9a-85a4-783998a290df\") " pod="openshift-ingress-canary/ingress-canary-rmxv5" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.501171 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5g629"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.512716 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" Jan 21 17:18:56 crc kubenswrapper[4823]: W0121 17:18:56.523110 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2d5770_8ba0_4dc7_bdc0_f217cbd4da10.slice/crio-dde4f8799e6188b057170d9d051808e890a1930670b8f272e8602718df11f230 WatchSource:0}: Error finding container dde4f8799e6188b057170d9d051808e890a1930670b8f272e8602718df11f230: Status 404 returned error can't find the container with id dde4f8799e6188b057170d9d051808e890a1930670b8f272e8602718df11f230 Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.537912 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vzq92"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.538206 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.538537 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.038526779 +0000 UTC m=+137.964657639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.540331 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6cdf8e9-64f0-4e78-99cb-94f0affbd11b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6h6rf\" (UID: \"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.544404 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37f78636-4c9d-48e0-869d-dbfd7a83eace-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mc68\" (UID: \"37f78636-4c9d-48e0-869d-dbfd7a83eace\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.547246 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.550417 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jlzv\" (UniqueName: \"kubernetes.io/projected/423ca236-207d-44cf-91f5-bdafb1c778a5-kube-api-access-6jlzv\") pod \"service-ca-operator-777779d784-jmn4h\" (UID: \"423ca236-207d-44cf-91f5-bdafb1c778a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.557900 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dc99l"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.562521 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29fae590-dcbb-4d4c-8941-ad734c10dfef-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2c287\" (UID: \"29fae590-dcbb-4d4c-8941-ad734c10dfef\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.577755 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.586655 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs26h\" (UniqueName: \"kubernetes.io/projected/fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011-kube-api-access-fs26h\") pod \"authentication-operator-69f744f599-dhlfg\" (UID: \"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.587191 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rmxv5" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.596388 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.600624 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbm6r\" (UniqueName: \"kubernetes.io/projected/450e2dbe-320e-45fa-8122-26b905dfb601-kube-api-access-lbm6r\") pod \"collect-profiles-29483595-ksvrx\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.605677 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.611992 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.620713 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.628833 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.635717 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vw29d"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.639914 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.640348 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.140328076 +0000 UTC m=+138.066458936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.642425 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8nkd"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.649571 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frnf7\" (UniqueName: \"kubernetes.io/projected/296482bb-604f-44b4-be2f-2dab346045ce-kube-api-access-frnf7\") pod \"package-server-manager-789f6589d5-knqcz\" (UID: \"296482bb-604f-44b4-be2f-2dab346045ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.671016 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.671669 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4qqzs" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.692417 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nxscj" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.692775 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.742578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.742931 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.242917524 +0000 UTC m=+138.169048384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.769654 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.822119 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.829382 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.833381 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.843441 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.843616 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.343592853 +0000 UTC m=+138.269723713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.843800 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.844193 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.344186738 +0000 UTC m=+138.270317598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.863891 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.890630 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.901764 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" event={"ID":"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c","Type":"ContainerStarted","Data":"e7d3ce2e15f6437d4bc001e5d800138af0167fa49ee40ed82d1adb51c433af81"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.904017 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" event={"ID":"baa89460-c316-4b9f-9060-44ad75e28e05","Type":"ContainerStarted","Data":"d2ffb6e5ee39e9717ad92d3ae600e6953d400f1334c9b9681b3f269e4f1dc43d"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.904040 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" event={"ID":"baa89460-c316-4b9f-9060-44ad75e28e05","Type":"ContainerStarted","Data":"9838429dfb5927f17f43e690261fe8b71f45616f53cb6beb54ca1154c68fb9c2"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.905574 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" event={"ID":"36ae04e1-ee34-492a-b2af-012a3fb66740","Type":"ContainerStarted","Data":"7e8fad7930f146357a5917504bdbcf6a4acadbe632884d48abf6e9a52f1b4148"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.905620 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" event={"ID":"36ae04e1-ee34-492a-b2af-012a3fb66740","Type":"ContainerStarted","Data":"c7606496399e874a07eb504b614f6f567a46c46920651234e637985baeab5240"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.908153 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" event={"ID":"c9647bb1-3272-4e92-8b16-ff16a90dfa8d","Type":"ContainerStarted","Data":"855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.908182 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" event={"ID":"c9647bb1-3272-4e92-8b16-ff16a90dfa8d","Type":"ContainerStarted","Data":"c2e19ef6f3bce0cf76c189d8bfef6d139a2c9d4f889c872e7da11743cfcdc749"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.915460 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dc99l" event={"ID":"7f3cfefd-b182-4468-a496-dd3c08e72508","Type":"ContainerStarted","Data":"ffdd6240627294e22f2262f98117089cbd08b0e98e2b478d5e44631f3d0600bd"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.917296 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4dj4t" event={"ID":"ac886837-67ac-48e7-b5cb-024a0ed1ea01","Type":"ContainerStarted","Data":"07d73f41bcc4e03bb8243a1e7bd5640de55ffcfba9a25714c8c90f58538cb19c"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.917321 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-4dj4t" event={"ID":"ac886837-67ac-48e7-b5cb-024a0ed1ea01","Type":"ContainerStarted","Data":"0fca38632b59fddb582d07b781c01bee6f556d10b794276431bb546df00f553e"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.942180 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" event={"ID":"2e548be0-d2ab-4bad-a18c-cbe203dbb314","Type":"ContainerStarted","Data":"9eea56627cf1529f1c399653f06549a40da4be6abf8570ebed23566e72d324dc"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.944812 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.946224 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.446204772 +0000 UTC m=+138.372335632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.946324 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" event={"ID":"168ed525-dbb5-4d4b-967f-0fadb7a7f53f","Type":"ContainerStarted","Data":"51bbf1db2fde6f8d402bada01b33b90ff846013e9af5149ad965842e904d1f4f"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.947653 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5g629" event={"ID":"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10","Type":"ContainerStarted","Data":"dde4f8799e6188b057170d9d051808e890a1930670b8f272e8602718df11f230"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.947990 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:56 crc kubenswrapper[4823]: E0121 17:18:56.948477 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.448460289 +0000 UTC m=+138.374591199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.962822 4823 generic.go:334] "Generic (PLEG): container finished" podID="fa305dd8-20a7-4adb-9c38-1f4cee672164" containerID="c41c6a101ced32f3c4328297a376188480f12b1616c8096fac6e3d83d5d7766b" exitCode=0 Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.962955 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" event={"ID":"fa305dd8-20a7-4adb-9c38-1f4cee672164","Type":"ContainerDied","Data":"c41c6a101ced32f3c4328297a376188480f12b1616c8096fac6e3d83d5d7766b"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.962983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" event={"ID":"fa305dd8-20a7-4adb-9c38-1f4cee672164","Type":"ContainerStarted","Data":"4f8421a97d4275da5c4bf58b985fee976f0045db797ac61eb32c408b332b1362"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.972631 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" event={"ID":"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620","Type":"ContainerStarted","Data":"f8994a7e04ae7d47874e0f19d7d148f231b47a859a38b70af6d78359324a0508"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.973171 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.975395 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" event={"ID":"b61875bd-8183-49ad-a085-7478b5b97ca8","Type":"ContainerStarted","Data":"65587d7dd1c11e78502a42457e924c45571e53127404641ae04c131e1bd9c3ca"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.977183 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m"] Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.978794 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qdbwc" event={"ID":"e52e7f7b-367a-496e-8979-cf99488572e9","Type":"ContainerStarted","Data":"b305a6cabea5643d8eeda8ba18088f2fe37ca557ec2a9345d1392efa20daf2bf"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.978818 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qdbwc" event={"ID":"e52e7f7b-367a-496e-8979-cf99488572e9","Type":"ContainerStarted","Data":"85b1d19da9e8582718689423e155762dc0b6cfcacfd431dc577ab0a841a1cfaa"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.979481 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qdbwc" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.980494 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" 
event={"ID":"72b0a364-8275-4bc5-bfcf-744d1caf430e","Type":"ContainerStarted","Data":"c73327481757c8767aa8541cd9a3fd6097430f29c06582e4ff26ee46f23afb1f"} Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.986037 4823 patch_prober.go:28] interesting pod/console-operator-58897d9998-qdbwc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.986082 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qdbwc" podUID="e52e7f7b-367a-496e-8979-cf99488572e9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.986225 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:18:56 crc kubenswrapper[4823]: I0121 17:18:56.989792 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" event={"ID":"a5fe5944-a8f6-47a3-9dc7-38f9d276848f","Type":"ContainerStarted","Data":"7749ca8f3bde8f867e7545f0061d967a1c4100fd7da6d4d88ed7a032be074bd5"} Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.000620 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jmjt6" event={"ID":"c474281f-3344-4846-9dd2-b78f8c3b7145","Type":"ContainerStarted","Data":"fecfb5b15edeac924410d666f41d1bc2420d047e0bdd2f7d26dea3e9160e2b8d"} Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.001565 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jmjt6" Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.014693 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vttrq" event={"ID":"18f96219-7979-4162-8a45-7439bdb10075","Type":"ContainerStarted","Data":"fcc3dedd3670e5dd01d6793226e6f197b435ea100c7a957bf96be9f296842d71"} Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.015402 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4" event={"ID":"eff795e9-5c79-4605-8065-56c14471445f","Type":"ContainerStarted","Data":"a3c2d1809f9539dbac227f1f869efa4dd88117753abb82cf102aa496143136a9"} Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.020869 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-jmjt6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.020915 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jmjt6" podUID="c474281f-3344-4846-9dd2-b78f8c3b7145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 21 17:18:57 crc kubenswrapper[4823]: W0121 17:18:57.033521 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf1c9fc3_bc84_4d96_b34e_c059587cffc7.slice/crio-93a822a2995fb7ed2964b489ec1d9b25c70dfee009ab22b511a38c7db4f28ed2 WatchSource:0}: Error finding container 93a822a2995fb7ed2964b489ec1d9b25c70dfee009ab22b511a38c7db4f28ed2: Status 404 returned error can't find the container with id 93a822a2995fb7ed2964b489ec1d9b25c70dfee009ab22b511a38c7db4f28ed2 Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.053054 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.054526 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.554504324 +0000 UTC m=+138.480635184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.156453 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.157132 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.657116341 +0000 UTC m=+138.583247201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.174822 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wzpd6"] Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.252917 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk"] Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.258091 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.258263 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.758238462 +0000 UTC m=+138.684369322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.258399 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.258717 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.758710094 +0000 UTC m=+138.684840954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.323008 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h"] Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.361536 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.361989 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.861958998 +0000 UTC m=+138.788089858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.391591 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2"] Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.434382 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rmxv5"] Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.463235 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.463575 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:57.963560151 +0000 UTC m=+138.889691011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.569187 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn"] Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.572762 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.573189 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:58.073172256 +0000 UTC m=+138.999303116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.589477 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dhlfg"] Jan 21 17:18:57 crc kubenswrapper[4823]: W0121 17:18:57.693631 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e2b4cce_27bb_496d_8766_7724e90ab8ca.slice/crio-e8cf5b582981ddd1c5709c81f455c3b6eb5f3d38adb3dda070897388d2af3596 WatchSource:0}: Error finding container e8cf5b582981ddd1c5709c81f455c3b6eb5f3d38adb3dda070897388d2af3596: Status 404 returned error can't find the container with id e8cf5b582981ddd1c5709c81f455c3b6eb5f3d38adb3dda070897388d2af3596 Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.710219 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.710666 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:58.210645377 +0000 UTC m=+139.136776237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.724007 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n4ttw"] Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.773588 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-q5pd7"] Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.812141 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.812487 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:58.312473196 +0000 UTC m=+139.238604056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.913540 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:57 crc kubenswrapper[4823]: E0121 17:18:57.913925 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:58.413910514 +0000 UTC m=+139.340041374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.955032 4823 csr.go:261] certificate signing request csr-p25jn is approved, waiting to be issued Jan 21 17:18:57 crc kubenswrapper[4823]: I0121 17:18:57.962502 4823 csr.go:257] certificate signing request csr-p25jn is issued Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.018918 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.019266 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:58.519251271 +0000 UTC m=+139.445382131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.040923 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287"] Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.095621 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" event={"ID":"fc7f0194-5611-45d4-b127-1dafa0f1fe76","Type":"ContainerStarted","Data":"42958fed6badb9fe98b653e220945232d24e162052e62758770252203234b901"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.105276 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" event={"ID":"c9274abf-35a8-4c79-9995-142413907ffc","Type":"ContainerStarted","Data":"b33b5652f96a5bf793867be046f290b6138967c5d35d6289267a9b7135da2576"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.119971 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.120306 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:58.62029572 +0000 UTC m=+139.546426580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.141541 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" podStartSLOduration=120.141518847 podStartE2EDuration="2m0.141518847s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.128231811 +0000 UTC m=+139.054362671" watchObservedRunningTime="2026-01-21 17:18:58.141518847 +0000 UTC m=+139.067649707" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.150006 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" event={"ID":"cb80c5ee-bf4c-4eea-be92-0852730ef914","Type":"ContainerStarted","Data":"6cf5d08782fe443e0e7d57360d314d779e1cbba0e0e3a9b714b4823a96a90547"} Jan 21 17:18:58 crc kubenswrapper[4823]: W0121 17:18:58.179773 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod481b8384_d11b_4bcb_9705_00065afa020f.slice/crio-b26c564892f9fbde771ec49bd277889f8860fcf7ac2695ade65175c066a4c0d9 WatchSource:0}: Error finding container b26c564892f9fbde771ec49bd277889f8860fcf7ac2695ade65175c066a4c0d9: Status 404 returned error can't find the container with id b26c564892f9fbde771ec49bd277889f8860fcf7ac2695ade65175c066a4c0d9 Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.181558 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-44lct" podStartSLOduration=120.181540061 podStartE2EDuration="2m0.181540061s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.18071947 +0000 UTC m=+139.106850330" watchObservedRunningTime="2026-01-21 17:18:58.181540061 +0000 UTC m=+139.107670921" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.203536 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" podStartSLOduration=120.203499857 podStartE2EDuration="2m0.203499857s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.203007124 +0000 UTC m=+139.129138004" watchObservedRunningTime="2026-01-21 17:18:58.203499857 +0000 UTC m=+139.129630707" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.209669 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" event={"ID":"0e2b4cce-27bb-496d-8766-7724e90ab8ca","Type":"ContainerStarted","Data":"e8cf5b582981ddd1c5709c81f455c3b6eb5f3d38adb3dda070897388d2af3596"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.222321 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.222790 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:58.722774875 +0000 UTC m=+139.648905735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.301742 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jmjt6" podStartSLOduration=121.301728074 podStartE2EDuration="2m1.301728074s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.247480321 +0000 UTC m=+139.173611181" watchObservedRunningTime="2026-01-21 17:18:58.301728074 +0000 UTC m=+139.227858934" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.302618 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qdbwc" podStartSLOduration=121.302612187 podStartE2EDuration="2m1.302612187s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.299909968 +0000 UTC m=+139.226040848" watchObservedRunningTime="2026-01-21 17:18:58.302612187 +0000 UTC m=+139.228743047" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.329601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.329978 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:58.829966199 +0000 UTC m=+139.756097059 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.341685 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv"] Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.343383 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" event={"ID":"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c","Type":"ContainerStarted","Data":"384bc71b98e47ca25ccd5f46229472e6bbf9572e8569275b6ac3d085a8577a57"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.361930 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4dj4t" podStartSLOduration=121.361909878 podStartE2EDuration="2m1.361909878s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.354304696 +0000 UTC m=+139.280435566" watchObservedRunningTime="2026-01-21 17:18:58.361909878 +0000 UTC m=+139.288040738" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.373740 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h"] Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.384684 4823 generic.go:334] "Generic (PLEG): container finished" podID="72b0a364-8275-4bc5-bfcf-744d1caf430e" containerID="94e47d7dffd976402e73b2d77ab22119f501f0dc32cada582c37cf6985ce7a2f" exitCode=0 Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.384772 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" event={"ID":"72b0a364-8275-4bc5-bfcf-744d1caf430e","Type":"ContainerDied","Data":"94e47d7dffd976402e73b2d77ab22119f501f0dc32cada582c37cf6985ce7a2f"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.426939 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" event={"ID":"7e7a4213-8205-498b-8390-506f5f273557","Type":"ContainerStarted","Data":"7335feff6d5ba59357ed2fbb10efc54b83c154d6d6cb7d611165f0380c567942"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.437625 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.438158 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:58.938135928 +0000 UTC m=+139.864266788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.450164 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nxscj"] Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.476442 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h"] Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.507271 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4qqzs" event={"ID":"df1c9fc3-bc84-4d96-b34e-c059587cffc7","Type":"ContainerStarted","Data":"93a822a2995fb7ed2964b489ec1d9b25c70dfee009ab22b511a38c7db4f28ed2"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.524266 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" event={"ID":"7853d878-1dc8-4986-b9c8-e857f14a3230","Type":"ContainerStarted","Data":"89c1284424c2552507f3e82dfe14dc7c1c55dcb785e8fb5c9cf4f78ae55f0a05"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.524595 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.528623 4823 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-blkpl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.528695 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" podUID="7853d878-1dc8-4986-b9c8-e857f14a3230" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.540088 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.541833 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.041821654 +0000 UTC m=+139.967952514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.547805 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx"] Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.553713 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" event={"ID":"a5fe5944-a8f6-47a3-9dc7-38f9d276848f","Type":"ContainerStarted","Data":"96fb654c521d0b9b03322e57b65184a3966f7cd2bfb34fc2fec1a23b196508ee"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.577325 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" event={"ID":"36ae04e1-ee34-492a-b2af-012a3fb66740","Type":"ContainerStarted","Data":"9bd91653bbbda9e872194fb58d31bb58ff18767c7145501290c12cf2d533987a"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.579673 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" podStartSLOduration=120.579650541 podStartE2EDuration="2m0.579650541s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.57444621 +0000 UTC m=+139.500577070" watchObservedRunningTime="2026-01-21 17:18:58.579650541 +0000 UTC m=+139.505781401" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.594176 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn" event={"ID":"7a64c737-acdd-4653-ad52-bcf69b3b69f8","Type":"ContainerStarted","Data":"f412bff9ef7280c8c7be04d60a6701e31a62649d7ace06300a2802777c54cfcc"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.623754 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5v8pb" podStartSLOduration=121.623732648 podStartE2EDuration="2m1.623732648s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.607365313 +0000 UTC m=+139.533496183" watchObservedRunningTime="2026-01-21 17:18:58.623732648 +0000 UTC m=+139.549863508" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.623876 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h" event={"ID":"e2f88bc8-f1e5-4c78-886a-0a0d2c2e056e","Type":"ContainerStarted","Data":"4179380fb031511ff2ac46170deef9453bcb8dee11594612e7e1d7bb18749a7b"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.642206 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.643173 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.143153029 +0000 UTC m=+140.069283889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.643570 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" event={"ID":"b4c39836-a956-4dac-a063-a9a8aadfff84","Type":"ContainerStarted","Data":"3c718f34a26c0c77765e8efae1ba8637f2a4c9e352bac231d7ffe9ba90c91e0f"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.643738 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68"] Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.678254 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" event={"ID":"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011","Type":"ContainerStarted","Data":"e945d5fe5da594bb57f2317e4991f5cf9248e252f6e2fa63e5b19db680fe757c"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.710962 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" event={"ID":"fb17a243-ae06-4606-836d-52a1a620bfe6","Type":"ContainerStarted","Data":"7345e427bc1aa9b34d46b63cfc22a6cb020a6914ed23799021411417e30b7fcb"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.732927 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tnrwq" podStartSLOduration=120.732913192 podStartE2EDuration="2m0.732913192s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.655313297 +0000 UTC m=+139.581444157" watchObservedRunningTime="2026-01-21 17:18:58.732913192 +0000 UTC m=+139.659044052" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.734024 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf"] Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.736804 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz"] Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.743688 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.744281 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.24426566 +0000 UTC m=+140.170396520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.750135 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4" event={"ID":"eff795e9-5c79-4605-8065-56c14471445f","Type":"ContainerStarted","Data":"6aac536f691681ca28cdaa51b968b31b131ec79c20b04b04ba5e5c44e15e303c"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.768183 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rmxv5" event={"ID":"52c8e75a-d225-4a9a-85a4-783998a290df","Type":"ContainerStarted","Data":"895564e1b8c8fc92be2ef10e5c2039c0ded3f8379f47fab6760e318c18fdee8b"} Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.768912 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-jmjt6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.768960 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jmjt6" podUID="c474281f-3344-4846-9dd2-b78f8c3b7145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.770029 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.788256 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.845280 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.845677 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.345659267 +0000 UTC m=+140.271790137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.877014 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" podStartSLOduration=120.876998431 podStartE2EDuration="2m0.876998431s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:18:58.777579103 +0000 UTC m=+139.703709973" watchObservedRunningTime="2026-01-21 17:18:58.876998431 +0000 UTC m=+139.803129291" Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.947833 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:18:58 crc kubenswrapper[4823]: E0121 17:18:58.949460 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.449446105 +0000 UTC m=+140.375576965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.963655 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 17:13:57 +0000 UTC, rotation deadline is 2026-11-04 06:56:49.348568371 +0000 UTC Jan 21 17:18:58 crc kubenswrapper[4823]: I0121 17:18:58.963688 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6877h37m50.384884745s for next certificate rotation Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.055828 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.056196 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.556177578 +0000 UTC m=+140.482308438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.157351 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.158150 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.65813811 +0000 UTC m=+140.584268970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.261735 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.262382 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.762362179 +0000 UTC m=+140.688493039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.366154 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.366626 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.866613808 +0000 UTC m=+140.792744668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.452391 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qdbwc"
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.467249 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.467590 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:18:59.967572335 +0000 UTC m=+140.893703205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.568916 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.569758 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.069727862 +0000 UTC m=+140.995858722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.679377 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.679804 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.179787238 +0000 UTC m=+141.105918098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.780226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.780523 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.280508259 +0000 UTC m=+141.206639129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.832344 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4qqzs" event={"ID":"df1c9fc3-bc84-4d96-b34e-c059587cffc7","Type":"ContainerStarted","Data":"8d4dd38267f013f49f9a180b73a48f659c6bd255582e5c5d14f6e9064ee25955"}
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.878519 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" event={"ID":"296482bb-604f-44b4-be2f-2dab346045ce","Type":"ContainerStarted","Data":"9c82af05b00787d96c3904bb8d1363966b7b7296e0eca79f261cf77bd09d9a75"}
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.878573 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" event={"ID":"296482bb-604f-44b4-be2f-2dab346045ce","Type":"ContainerStarted","Data":"24638bbd106304dd0c57ed650ce7397c39e64d664a42af2c34260d6f926872bd"}
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.883641 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.884909 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.384890462 +0000 UTC m=+141.311021322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.902641 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" event={"ID":"cb80c5ee-bf4c-4eea-be92-0852730ef914","Type":"ContainerStarted","Data":"a1b01f44e42b9a9af5ec06b2d0f46aac3123e2266f71feb59d9301956cb65917"}
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.908490 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" event={"ID":"fc7f0194-5611-45d4-b127-1dafa0f1fe76","Type":"ContainerStarted","Data":"07a0e5c56b1c302c57678da647b49a443631cfc5d9e77a50c635a2170b5abb08"}
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.962686 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4" event={"ID":"eff795e9-5c79-4605-8065-56c14471445f","Type":"ContainerStarted","Data":"59b3c20286749fb7c54dded19f8f06783bb6f483a46430ba6c63d3ff042b64ee"}
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.979702 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" event={"ID":"450e2dbe-320e-45fa-8122-26b905dfb601","Type":"ContainerStarted","Data":"bd3d106b1b433ca734932646aded22ca872c0d4283dc1a68fb98c52258f5b25a"}
Jan 21 17:18:59 crc kubenswrapper[4823]: I0121 17:18:59.985814 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:18:59 crc kubenswrapper[4823]: E0121 17:18:59.986271 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.486255418 +0000 UTC m=+141.412386278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.031995 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k2ppk" podStartSLOduration=122.031978906 podStartE2EDuration="2m2.031978906s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.031887884 +0000 UTC m=+140.958018754" watchObservedRunningTime="2026-01-21 17:19:00.031978906 +0000 UTC m=+140.958109766"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.060547 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4qqzs" podStartSLOduration=7.060531059 podStartE2EDuration="7.060531059s" podCreationTimestamp="2026-01-21 17:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.059668187 +0000 UTC m=+140.985799047" watchObservedRunningTime="2026-01-21 17:19:00.060531059 +0000 UTC m=+140.986661919"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.079828 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" event={"ID":"168ed525-dbb5-4d4b-967f-0fadb7a7f53f","Type":"ContainerStarted","Data":"76604c94d87a3581f17521eb8b781bb91d6e4e21627ff5cda45530aa77a9c206"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.087992 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.094590 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.594571761 +0000 UTC m=+141.520702621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.098848 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn" event={"ID":"7a64c737-acdd-4653-ad52-bcf69b3b69f8","Type":"ContainerStarted","Data":"1171e04e71f60ee3b708c602a5bbf5bd0291f0d6d865a1ef662f9fd5b18de428"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.115477 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" event={"ID":"0e2b4cce-27bb-496d-8766-7724e90ab8ca","Type":"ContainerStarted","Data":"06b6708023fbed4196a18b27bce99e8c4f862613f03e808262292f1c5e6ecfe2"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.190887 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.192516 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.692494511 +0000 UTC m=+141.618625371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.231217 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wrc7m" podStartSLOduration=122.231192391 podStartE2EDuration="2m2.231192391s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.130141212 +0000 UTC m=+141.056272072" watchObservedRunningTime="2026-01-21 17:19:00.231192391 +0000 UTC m=+141.157323251"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.232253 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vchb4" podStartSLOduration=123.232244267 podStartE2EDuration="2m3.232244267s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.203455098 +0000 UTC m=+141.129585968" watchObservedRunningTime="2026-01-21 17:19:00.232244267 +0000 UTC m=+141.158375127"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.233543 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" event={"ID":"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b","Type":"ContainerStarted","Data":"51dc343425698023032942538c0a6864df6ae92e608eb38beefff199c68bd48d"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.252862 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghcjn" podStartSLOduration=122.252815658 podStartE2EDuration="2m2.252815658s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.23316368 +0000 UTC m=+141.159294540" watchObservedRunningTime="2026-01-21 17:19:00.252815658 +0000 UTC m=+141.178946528"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.292024 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dc99l" event={"ID":"7f3cfefd-b182-4468-a496-dd3c08e72508","Type":"ContainerStarted","Data":"77d7addcf2da442a99036c6d4c8c86f5af8da120e6fe50edc585bbca3ea118c4"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.292511 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.293283 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.793267572 +0000 UTC m=+141.719398432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.296428 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" event={"ID":"7853d878-1dc8-4986-b9c8-e857f14a3230","Type":"ContainerStarted","Data":"71d5ee877faff7c1dfe7d21eb052cba2c932ef417a53242c9a8a0cebd74d565e"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.325194 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" podStartSLOduration=122.32517635 podStartE2EDuration="2m2.32517635s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.27617627 +0000 UTC m=+141.202307130" watchObservedRunningTime="2026-01-21 17:19:00.32517635 +0000 UTC m=+141.251307210"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.328422 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" podStartSLOduration=122.328411362 podStartE2EDuration="2m2.328411362s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.327324295 +0000 UTC m=+141.253455155" watchObservedRunningTime="2026-01-21 17:19:00.328411362 +0000 UTC m=+141.254542222"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.333595 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" event={"ID":"3536b7cb-5def-4468-9282-897a30251cd4","Type":"ContainerStarted","Data":"55cd03df063f3515d2ace0810d842de7d4d63ab73aca2603e00209a4f1bba3f2"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.333726 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" event={"ID":"3536b7cb-5def-4468-9282-897a30251cd4","Type":"ContainerStarted","Data":"6e6122eef603dddf6358f83a0e4e0911a0f6bfe741831d282fa243a293e710ba"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.334745 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.350468 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.359191 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" event={"ID":"fc2d3ecf-216f-4a71-9a16-ff6d7e0e7011","Type":"ContainerStarted","Data":"a9a15799f91eb36b9bcad735f71af7ed1c835001b8e87c237828113693665e17"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.368147 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" event={"ID":"29fae590-dcbb-4d4c-8941-ad734c10dfef","Type":"ContainerStarted","Data":"9195e3ff4bb5e624196cc879be1ae04405e9ed4d41d8df16d63a28c8f5323890"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.375712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" event={"ID":"c9274abf-35a8-4c79-9995-142413907ffc","Type":"ContainerStarted","Data":"c27bd07bf011fb269f673c07903f12e3ccda13262b2a31c78d31c4287f3cb830"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.394201 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.394887 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.894871295 +0000 UTC m=+141.821002155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.412758 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" podStartSLOduration=122.412741528 podStartE2EDuration="2m2.412741528s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.412122572 +0000 UTC m=+141.338253432" watchObservedRunningTime="2026-01-21 17:19:00.412741528 +0000 UTC m=+141.338872388"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.414098 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h" event={"ID":"e2f88bc8-f1e5-4c78-886a-0a0d2c2e056e","Type":"ContainerStarted","Data":"c8651351e4f284b85c1fe36b2dbd37c58ed33e0a4d9c1afe05e31b9292b367eb"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.462526 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10" containerID="c9cf74ef9ab7dee12f9e9159d4e8e37edd2d02bbf84672d748c1d5152a75c2b9" exitCode=0
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.462614 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5g629" event={"ID":"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10","Type":"ContainerDied","Data":"c9cf74ef9ab7dee12f9e9159d4e8e37edd2d02bbf84672d748c1d5152a75c2b9"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.463670 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" podStartSLOduration=122.463653887 podStartE2EDuration="2m2.463653887s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.463131964 +0000 UTC m=+141.389262824" watchObservedRunningTime="2026-01-21 17:19:00.463653887 +0000 UTC m=+141.389784757"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.496168 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.496277 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.996257842 +0000 UTC m=+141.922388702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.497537 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.497948 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:00.997931515 +0000 UTC m=+141.924062375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.513344 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" event={"ID":"2e12cd6e-3d66-46f1-8fcc-529d11c2ac2c","Type":"ContainerStarted","Data":"75901ab5928aa0ed27170156a0b00b47d68060fddadeb4e780cf04eb742c1fc0"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.558558 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" event={"ID":"2e548be0-d2ab-4bad-a18c-cbe203dbb314","Type":"ContainerStarted","Data":"35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.561906 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.563021 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vw29d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.563075 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" podUID="2e548be0-d2ab-4bad-a18c-cbe203dbb314" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.588284 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" event={"ID":"1438d22a-c19a-427d-b1b5-02cbf2675461","Type":"ContainerStarted","Data":"a4499ac3dd8e39bd1e2ffb8f069bdd66a42c92afa683c172ab519700c192828a"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.589209 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.603547 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k6v4k" podStartSLOduration=122.603526249 podStartE2EDuration="2m2.603526249s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.544765531 +0000 UTC m=+141.470896401" watchObservedRunningTime="2026-01-21 17:19:00.603526249 +0000 UTC m=+141.529657109"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.604125 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.605132 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.105087878 +0000 UTC m=+142.031218748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.608023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.608266 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv"
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.609262 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.109249213 +0000 UTC m=+142.035380163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.630260 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vttrq" event={"ID":"18f96219-7979-4162-8a45-7439bdb10075","Type":"ContainerStarted","Data":"b75776f4b2fc4092ed481b4cdeaf48112dd11692a9f7369587753385a85a97c5"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.645010 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nxscj" event={"ID":"096622f1-acd1-42ea-af9c-d43158bccf6c","Type":"ContainerStarted","Data":"8c69107cfd0c39a3276de5c51442c03d6cff35dd774337bb97b5be73e989f04e"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.645065 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nxscj" event={"ID":"096622f1-acd1-42ea-af9c-d43158bccf6c","Type":"ContainerStarted","Data":"6182a58569feea571511955a33b9f555561082deacda9c8876780f7a3d154b77"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.661354 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z5hx4" event={"ID":"fb17a243-ae06-4606-836d-52a1a620bfe6","Type":"ContainerStarted","Data":"99029573554d2359b55bcece14f54c37ea1b53f6d02bbab146f58e4d41e15608"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.683100 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhlfg" podStartSLOduration=122.683081932 podStartE2EDuration="2m2.683081932s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.603435786 +0000 UTC m=+141.529566646" watchObservedRunningTime="2026-01-21 17:19:00.683081932 +0000 UTC m=+141.609212792"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.713910 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.714949 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.214934389 +0000 UTC m=+142.141065249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.732030 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" event={"ID":"b4c39836-a956-4dac-a063-a9a8aadfff84","Type":"ContainerStarted","Data":"38294e6bafe516556335794091042329ddff680f5a8e5828486a439358c0db54"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.747868 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-vttrq" podStartSLOduration=122.747829741 podStartE2EDuration="2m2.747829741s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.712749463 +0000 UTC m=+141.638880323" watchObservedRunningTime="2026-01-21 17:19:00.747829741 +0000 UTC m=+141.673960601"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.778184 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" event={"ID":"481b8384-d11b-4bcb-9705-00065afa020f","Type":"ContainerStarted","Data":"5b797407ea468c5f0b5d7cbde6e3c3350f2ab4ab8d1484ed6eefdfcdc6c179a1"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.778231 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" event={"ID":"481b8384-d11b-4bcb-9705-00065afa020f","Type":"ContainerStarted","Data":"b26c564892f9fbde771ec49bd277889f8860fcf7ac2695ade65175c066a4c0d9"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.787092 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z797g" podStartSLOduration=122.787077115 podStartE2EDuration="2m2.787077115s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.750205872 +0000 UTC m=+141.676336742" watchObservedRunningTime="2026-01-21 17:19:00.787077115 +0000 UTC m=+141.713207975"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.805209 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rmxv5" event={"ID":"52c8e75a-d225-4a9a-85a4-783998a290df","Type":"ContainerStarted","Data":"ca02b45e3707ec8ab687113c3eb2f9d758d9953ea5d8355fb873531ca038abdd"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.813932 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" podStartSLOduration=122.813917455 podStartE2EDuration="2m2.813917455s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.813369921 +0000 UTC m=+141.739500781" watchObservedRunningTime="2026-01-21 17:19:00.813917455 +0000 UTC m=+141.740048315"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.815112 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.815434 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.315422593 +0000 UTC m=+142.241553453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.829243 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" event={"ID":"fa305dd8-20a7-4adb-9c38-1f4cee672164","Type":"ContainerStarted","Data":"2874941c846fc830420f0d36aa4f87c93b454bfab150105c38858f2af4c4a362"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.830397 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.858842 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" podStartSLOduration=122.858815502 podStartE2EDuration="2m2.858815502s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.848231044 +0000 UTC m=+141.774361904" watchObservedRunningTime="2026-01-21 17:19:00.858815502 +0000 UTC m=+141.784946362"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.860952 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" event={"ID":"423ca236-207d-44cf-91f5-bdafb1c778a5","Type":"ContainerStarted","Data":"7757ff58efefcdc322ee85bf649d9e8bac3b0dd7ee567ab897b2bfa8e1032bbd"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.861002 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" event={"ID":"423ca236-207d-44cf-91f5-bdafb1c778a5","Type":"ContainerStarted","Data":"28d5c1404346d683f750e4e45913bca83cd76f444209d65e95a3539e4bfca094"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.868662 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" event={"ID":"b61875bd-8183-49ad-a085-7478b5b97ca8","Type":"ContainerStarted","Data":"06c0f3c72b2f2e10e9b7be0c65a32a2fa965ac64c274a624802806917386ec93"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.869083 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.904823 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" event={"ID":"37f78636-4c9d-48e0-869d-dbfd7a83eace","Type":"ContainerStarted","Data":"71a44c4e5109f262d41e2c451931ab051f5cb0321ed65cad69491a314e9bbb61"}
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.906652 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-jmjt6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.906700 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jmjt6" podUID="c474281f-3344-4846-9dd2-b78f8c3b7145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.916414 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:00 crc kubenswrapper[4823]: E0121 17:19:00.919768 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.419748015 +0000 UTC m=+142.345878875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.927540 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-q5pd7" podStartSLOduration=122.927522932 podStartE2EDuration="2m2.927522932s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.92627879 +0000 UTC m=+141.852409650" watchObservedRunningTime="2026-01-21 17:19:00.927522932 +0000 UTC m=+141.853653792"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.929060 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h" podStartSLOduration=122.92905194 podStartE2EDuration="2m2.92905194s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.896367133 +0000 UTC m=+141.822497993" watchObservedRunningTime="2026-01-21 17:19:00.92905194 +0000 UTC m=+141.855182800"
Jan 21 17:19:00 crc kubenswrapper[4823]: I0121 17:19:00.962492 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jmn4h" podStartSLOduration=122.962470486 podStartE2EDuration="2m2.962470486s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:00.961076811 +0000 UTC m=+141.887207681" watchObservedRunningTime="2026-01-21 17:19:00.962470486 +0000 UTC m=+141.888601346"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.008354 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq" podStartSLOduration=124.008332988 podStartE2EDuration="2m4.008332988s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:01.004970833 +0000 UTC m=+141.931101693" watchObservedRunningTime="2026-01-21 17:19:01.008332988 +0000 UTC m=+141.934463848"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.024441 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.025960 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.525940674 +0000 UTC m=+142.452071534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.086990 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chzvq"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.103406 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" podStartSLOduration=123.103389815 podStartE2EDuration="2m3.103389815s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:01.060455898 +0000 UTC m=+141.986586758" watchObservedRunningTime="2026-01-21 17:19:01.103389815 +0000 UTC m=+142.029520675"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.131393 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.131702 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.631686921 +0000 UTC m=+142.557817781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.132888 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-wzpd6" podStartSLOduration=123.132877301 podStartE2EDuration="2m3.132877301s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:01.131160508 +0000 UTC m=+142.057291368" watchObservedRunningTime="2026-01-21 17:19:01.132877301 +0000 UTC m=+142.059008161"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.134325 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rmxv5" podStartSLOduration=8.134316018 podStartE2EDuration="8.134316018s" podCreationTimestamp="2026-01-21 17:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:01.104413321 +0000 UTC m=+142.030544181" watchObservedRunningTime="2026-01-21 17:19:01.134316018 +0000 UTC m=+142.060446878"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.209441 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" podStartSLOduration=123.20942562 podStartE2EDuration="2m3.20942562s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:01.16243437 +0000 UTC m=+142.088565230" watchObservedRunningTime="2026-01-21 17:19:01.20942562 +0000 UTC m=+142.135556480"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.241889 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.242480 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.742461076 +0000 UTC m=+142.668591926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.335254 4823 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wml2h container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.335321 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" podUID="3536b7cb-5def-4468-9282-897a30251cd4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.343491 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.343842 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.843829803 +0000 UTC m=+142.769960653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.372738 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.445019 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.445295 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:01.945284092 +0000 UTC m=+142.871414952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.449417 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vttrq"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.453958 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 17:19:01 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld
Jan 21 17:19:01 crc kubenswrapper[4823]: [+]process-running ok
Jan 21 17:19:01 crc kubenswrapper[4823]: healthz check failed
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.454023 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.546490 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.546634 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.046611628 +0000 UTC m=+142.972742488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.546681 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.547003 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.046995847 +0000 UTC m=+142.973126707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.660159 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.660371 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.160341537 +0000 UTC m=+143.086472397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.660795 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.661068 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.161056325 +0000 UTC m=+143.087187185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.762088 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.762341 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.262309359 +0000 UTC m=+143.188440219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.762513 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6"
Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.762803 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.262788411 +0000 UTC m=+143.188919271 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.863496 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.863697 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.363669336 +0000 UTC m=+143.289800196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.863863 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.864191 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.364176729 +0000 UTC m=+143.290307589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.915660 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2c287" event={"ID":"29fae590-dcbb-4d4c-8941-ad734c10dfef","Type":"ContainerStarted","Data":"7da1393de55ccafd8a0493281f7c6d25a3e8fe4ea2591caf85ad6e82774cca1e"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.917008 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qsplv" event={"ID":"1438d22a-c19a-427d-b1b5-02cbf2675461","Type":"ContainerStarted","Data":"19096afb2cb400aa990587be9b62d81546f5a93a7fdcec61ff381cabfbd7915c"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.918886 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nxscj" event={"ID":"096622f1-acd1-42ea-af9c-d43158bccf6c","Type":"ContainerStarted","Data":"37fd333ae45a02a12002ab4d7f91d097abc208d5e72770e5369c7cd7ed0fddfe"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.919014 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nxscj" Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.921044 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5g629" event={"ID":"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10","Type":"ContainerStarted","Data":"f9dc0642cf874712da47fbe32bdefb2934d1b9d99008d85c18a6daae0f3d5312"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.921078 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5g629" event={"ID":"ab2d5770-8ba0-4dc7-bdc0-f217cbd4da10","Type":"ContainerStarted","Data":"d43b34e05a2a06a1bcb06db51b9b93377620a6cf08ce5139eea608b00f035557"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.923543 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kztl2" event={"ID":"0e2b4cce-27bb-496d-8766-7724e90ab8ca","Type":"ContainerStarted","Data":"56ada6a4b17980221fe97f576d3e105bcce6c0d975ed9ede1a408531824dd494"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.925389 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" event={"ID":"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b","Type":"ContainerStarted","Data":"f5915d266cedf196a29d9154a0494cfaf2338db68fb47e0da3293cf7f3e0b00f"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.925415 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" event={"ID":"c6cdf8e9-64f0-4e78-99cb-94f0affbd11b","Type":"ContainerStarted","Data":"ac0f8f46850ab23b73d2caba432537dd9b774b7b2fcbebbc9bbc6d6c3ef0331f"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.927032 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wlp2h" 
event={"ID":"e2f88bc8-f1e5-4c78-886a-0a0d2c2e056e","Type":"ContainerStarted","Data":"ab7f64c40a9f6c215cdba6695c9b8353ce6e17cc553e31bebe765d409c8d13b4"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.928651 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" event={"ID":"296482bb-604f-44b4-be2f-2dab346045ce","Type":"ContainerStarted","Data":"4da6a0b12d788730c3ddcecac42e4f6dfb10b01d6f835b0a9ad4520b52b6359d"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.929004 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.935370 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nxscj" podStartSLOduration=8.935356471 podStartE2EDuration="8.935356471s" podCreationTimestamp="2026-01-21 17:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:01.93253079 +0000 UTC m=+142.858661650" watchObservedRunningTime="2026-01-21 17:19:01.935356471 +0000 UTC m=+142.861487331" Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.937696 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mc68" event={"ID":"37f78636-4c9d-48e0-869d-dbfd7a83eace","Type":"ContainerStarted","Data":"0facd9d836798a2f34d4425594c0e2de4958dd7d86583b1a577279f7472970e0"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.939872 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" event={"ID":"7e7a4213-8205-498b-8390-506f5f273557","Type":"ContainerStarted","Data":"9a02d27b40a51e4f48e54f49025c739ad0f13c5ccdf9e2245694656c079cae02"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.939911 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" event={"ID":"7e7a4213-8205-498b-8390-506f5f273557","Type":"ContainerStarted","Data":"0c39c4d9796e2a08f40fe00e7af11fe532c70ebf3592c3cf03167603b6761db7"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.941634 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dc99l" event={"ID":"7f3cfefd-b182-4468-a496-dd3c08e72508","Type":"ContainerStarted","Data":"496dddea88f841ba3a2116ec802d23bf7c602234eec6fd6a4823718490d9f0aa"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.943018 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" event={"ID":"450e2dbe-320e-45fa-8122-26b905dfb601","Type":"ContainerStarted","Data":"9d28086d8468045fb3b611d6356708273715a871ced198e81c48672383a1e4d7"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.944406 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vzq92" event={"ID":"168ed525-dbb5-4d4b-967f-0fadb7a7f53f","Type":"ContainerStarted","Data":"65ada3a9bf1cd560d02520c0cb6510ef3c182e3ab30a424a75a4d328276590ae"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.946823 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" 
event={"ID":"72b0a364-8275-4bc5-bfcf-744d1caf430e","Type":"ContainerStarted","Data":"6e740d62921de1124263be9462d38150ae621c60d0081ff3705266f028d144b0"} Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.952556 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5g629" podStartSLOduration=123.952543726 podStartE2EDuration="2m3.952543726s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:01.951311625 +0000 UTC m=+142.877442485" watchObservedRunningTime="2026-01-21 17:19:01.952543726 +0000 UTC m=+142.878674586" Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.954459 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.956222 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.964586 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.964683 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.464651293 +0000 UTC m=+143.390782163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.966474 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:01 crc kubenswrapper[4823]: E0121 17:19:01.968069 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.468054129 +0000 UTC m=+143.394184989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:01 crc kubenswrapper[4823]: I0121 17:19:01.988351 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" podStartSLOduration=123.988329472 podStartE2EDuration="2m3.988329472s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:01.973054386 +0000 UTC m=+142.899185246" watchObservedRunningTime="2026-01-21 17:19:01.988329472 +0000 UTC m=+142.914460332" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.014579 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" podStartSLOduration=124.014563997 podStartE2EDuration="2m4.014563997s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:02.013628003 +0000 UTC m=+142.939758863" watchObservedRunningTime="2026-01-21 17:19:02.014563997 +0000 UTC m=+142.940694857" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.014796 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6rf" podStartSLOduration=124.014792882 podStartE2EDuration="2m4.014792882s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:01.991578775 +0000 UTC m=+142.917709635" watchObservedRunningTime="2026-01-21 17:19:02.014792882 +0000 UTC m=+142.940923742" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.029297 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" podStartSLOduration=125.029280109 podStartE2EDuration="2m5.029280109s" podCreationTimestamp="2026-01-21 17:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:02.025313639 +0000 UTC m=+142.951444499" watchObservedRunningTime="2026-01-21 17:19:02.029280109 +0000 UTC m=+142.955410969" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.068868 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.069263 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 17:19:02.569246511 +0000 UTC m=+143.495377371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.081704 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-dc99l" podStartSLOduration=124.081686636 podStartE2EDuration="2m4.081686636s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:02.06366229 +0000 UTC m=+142.989793150" watchObservedRunningTime="2026-01-21 17:19:02.081686636 +0000 UTC m=+143.007817496" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.170966 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.171393 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.671371717 +0000 UTC m=+143.597502657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.241684 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9pg9m"] Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.242895 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.248591 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.272188 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.272431 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.772399345 +0000 UTC m=+143.698530205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.272485 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbrfh\" (UniqueName: \"kubernetes.io/projected/035531d0-ecfd-4d31-be47-08fc49762b7e-kube-api-access-pbrfh\") pod \"certified-operators-9pg9m\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.272739 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-utilities\") pod \"certified-operators-9pg9m\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.272807 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.272886 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-catalog-content\") pod \"certified-operators-9pg9m\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.273122 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 17:19:02.773106903 +0000 UTC m=+143.699237823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.282236 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pg9m"] Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.376618 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.377131 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.877110757 +0000 UTC m=+143.803241627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.377169 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.377236 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-catalog-content\") pod \"certified-operators-9pg9m\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.377278 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbrfh\" (UniqueName: \"kubernetes.io/projected/035531d0-ecfd-4d31-be47-08fc49762b7e-kube-api-access-pbrfh\") pod \"certified-operators-9pg9m\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.377411 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-utilities\") pod \"certified-operators-9pg9m\" (UID: 
\"035531d0-ecfd-4d31-be47-08fc49762b7e\") " pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.378728 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-utilities\") pod \"certified-operators-9pg9m\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.379342 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.879326403 +0000 UTC m=+143.805457263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.379434 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-catalog-content\") pod \"certified-operators-9pg9m\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.407129 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbrfh\" (UniqueName: \"kubernetes.io/projected/035531d0-ecfd-4d31-be47-08fc49762b7e-kube-api-access-pbrfh\") pod \"certified-operators-9pg9m\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.415709 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pqnps"] Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.434197 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.437342 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.453916 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:02 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:02 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:02 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.453968 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.479385 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.479958 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-utilities\") pod \"community-operators-pqnps\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.480012 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95pzg\" (UniqueName: \"kubernetes.io/projected/8edc9d3c-22fb-492b-8f1a-c488667e0df0-kube-api-access-95pzg\") pod \"community-operators-pqnps\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.480128 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-catalog-content\") pod \"community-operators-pqnps\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.480317 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:02.98029908 +0000 UTC m=+143.906429930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.489931 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqnps"] Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.538414 4823 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.556539 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.581675 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.581757 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-utilities\") pod \"community-operators-pqnps\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.581794 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95pzg\" (UniqueName: \"kubernetes.io/projected/8edc9d3c-22fb-492b-8f1a-c488667e0df0-kube-api-access-95pzg\") pod \"community-operators-pqnps\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.581876 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-catalog-content\") pod \"community-operators-pqnps\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.582111 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:03.082089857 +0000 UTC m=+144.008220777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.582437 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-catalog-content\") pod \"community-operators-pqnps\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.582722 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-utilities\") pod \"community-operators-pqnps\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.613630 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95pzg\" (UniqueName: \"kubernetes.io/projected/8edc9d3c-22fb-492b-8f1a-c488667e0df0-kube-api-access-95pzg\") pod \"community-operators-pqnps\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.636148 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kcnnz"] Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.637137 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.649680 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcnnz"] Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.682884 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.683154 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-catalog-content\") pod \"certified-operators-kcnnz\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.683241 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbnxn\" (UniqueName: \"kubernetes.io/projected/fef83eca-df2a-4e24-80fe-b8beb1b192c6-kube-api-access-jbnxn\") pod \"certified-operators-kcnnz\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.683273 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-utilities\") pod \"certified-operators-kcnnz\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.683415 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:03.183394612 +0000 UTC m=+144.109525472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.759872 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.784884 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbnxn\" (UniqueName: \"kubernetes.io/projected/fef83eca-df2a-4e24-80fe-b8beb1b192c6-kube-api-access-jbnxn\") pod \"certified-operators-kcnnz\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.785133 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-utilities\") pod \"certified-operators-kcnnz\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.785157 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.785214 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-catalog-content\") pod \"certified-operators-kcnnz\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.785598 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-catalog-content\") pod \"certified-operators-kcnnz\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.786450 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-utilities\") pod \"certified-operators-kcnnz\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.786669 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 17:19:03.286659787 +0000 UTC m=+144.212790647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nrwt6" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.809125 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zlnv2"] Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.810019 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.827060 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlnv2"] Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.836073 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbnxn\" (UniqueName: \"kubernetes.io/projected/fef83eca-df2a-4e24-80fe-b8beb1b192c6-kube-api-access-jbnxn\") pod \"certified-operators-kcnnz\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.885567 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.885786 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcsbn\" (UniqueName: \"kubernetes.io/projected/989e8ecd-3950-4494-b9af-911eeeed065c-kube-api-access-mcsbn\") pod \"community-operators-zlnv2\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.885833 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-utilities\") pod \"community-operators-zlnv2\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.885865 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-catalog-content\") pod \"community-operators-zlnv2\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:02 crc kubenswrapper[4823]: E0121 17:19:02.886010 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 17:19:03.385996762 +0000 UTC m=+144.312127622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.888167 4823 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T17:19:02.538448412Z","Handler":null,"Name":""} Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.907108 4823 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.907149 4823 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.922505 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pg9m"] Jan 21 17:19:02 crc kubenswrapper[4823]: W0121 17:19:02.931968 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod035531d0_ecfd_4d31_be47_08fc49762b7e.slice/crio-4cc62104a28be409eeb2979c75f6fc2e43574b7363ea00829aea02fdf5f08bb1 WatchSource:0}: Error finding container 4cc62104a28be409eeb2979c75f6fc2e43574b7363ea00829aea02fdf5f08bb1: Status 404 returned error can't find the container with id 4cc62104a28be409eeb2979c75f6fc2e43574b7363ea00829aea02fdf5f08bb1 Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.966920 4823 generic.go:334] "Generic (PLEG): container finished" podID="450e2dbe-320e-45fa-8122-26b905dfb601" containerID="9d28086d8468045fb3b611d6356708273715a871ced198e81c48672383a1e4d7" exitCode=0 Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.967187 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" event={"ID":"450e2dbe-320e-45fa-8122-26b905dfb601","Type":"ContainerDied","Data":"9d28086d8468045fb3b611d6356708273715a871ced198e81c48672383a1e4d7"} Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.986996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-utilities\") pod \"community-operators-zlnv2\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.987034 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-catalog-content\") pod \"community-operators-zlnv2\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.987109 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.987160 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcsbn\" (UniqueName: \"kubernetes.io/projected/989e8ecd-3950-4494-b9af-911eeeed065c-kube-api-access-mcsbn\") pod \"community-operators-zlnv2\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.987785 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-utilities\") pod \"community-operators-zlnv2\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.988008 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-catalog-content\") pod \"community-operators-zlnv2\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.989271 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pg9m" event={"ID":"035531d0-ecfd-4d31-be47-08fc49762b7e","Type":"ContainerStarted","Data":"4cc62104a28be409eeb2979c75f6fc2e43574b7363ea00829aea02fdf5f08bb1"} Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.995669 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 17:19:02 crc kubenswrapper[4823]: I0121 17:19:02.995714 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.000271 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" event={"ID":"7e7a4213-8205-498b-8390-506f5f273557","Type":"ContainerStarted","Data":"32558603daa1708e8e2fe7865e97a2f5c0f6ced37244e5f473628f78591f4148"} Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.000310 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" event={"ID":"7e7a4213-8205-498b-8390-506f5f273557","Type":"ContainerStarted","Data":"e33c7e5f5a00646f28c1478647edfc3c2e793cba61d1c5b936727218ac5a22d9"} Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.041952 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcsbn\" (UniqueName: \"kubernetes.io/projected/989e8ecd-3950-4494-b9af-911eeeed065c-kube-api-access-mcsbn\") pod \"community-operators-zlnv2\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.043469 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.063519 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-n4ttw" podStartSLOduration=10.063500017 podStartE2EDuration="10.063500017s" podCreationTimestamp="2026-01-21 17:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:03.063325232 +0000 UTC m=+143.989456092" watchObservedRunningTime="2026-01-21 17:19:03.063500017 +0000 UTC m=+143.989630877" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.142236 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqnps"] Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.144589 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nrwt6\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.178843 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.190684 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.309969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.363618 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.390544 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.457070 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:03 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:03 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:03 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.457137 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.597511 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlnv2"] Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.726839 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nrwt6"] Jan 21 17:19:03 crc kubenswrapper[4823]: W0121 17:19:03.739563 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ca3409_f1c5_41e7_aabe_3382b23fd48c.slice/crio-20eee32bf2304c9f40fec3b7f73f38874598d01e1cf8f4acd773c2d981a493b7 WatchSource:0}: Error finding container 20eee32bf2304c9f40fec3b7f73f38874598d01e1cf8f4acd773c2d981a493b7: Status 404 returned error can't find the container with id 20eee32bf2304c9f40fec3b7f73f38874598d01e1cf8f4acd773c2d981a493b7 Jan 21 17:19:03 crc kubenswrapper[4823]: I0121 17:19:03.742290 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcnnz"] Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.005536 4823 generic.go:334] "Generic (PLEG): container finished" podID="989e8ecd-3950-4494-b9af-911eeeed065c" 
containerID="509b02b98a0435a4d9901cb18edd3472350a3d2055fab56c1e8c006ecc296a10" exitCode=0 Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.005631 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnv2" event={"ID":"989e8ecd-3950-4494-b9af-911eeeed065c","Type":"ContainerDied","Data":"509b02b98a0435a4d9901cb18edd3472350a3d2055fab56c1e8c006ecc296a10"} Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.005906 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnv2" event={"ID":"989e8ecd-3950-4494-b9af-911eeeed065c","Type":"ContainerStarted","Data":"ec417db2d87bf3505e5a60c44fd643f94720a4e1ab5fa864d63de9fad2fe7386"} Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.017356 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.019325 4823 generic.go:334] "Generic (PLEG): container finished" podID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerID="484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0" exitCode=0 Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.019460 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pg9m" event={"ID":"035531d0-ecfd-4d31-be47-08fc49762b7e","Type":"ContainerDied","Data":"484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0"} Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.021945 4823 generic.go:334] "Generic (PLEG): container finished" podID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerID="7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a" exitCode=0 Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.022016 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqnps" event={"ID":"8edc9d3c-22fb-492b-8f1a-c488667e0df0","Type":"ContainerDied","Data":"7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a"} Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.022043 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqnps" event={"ID":"8edc9d3c-22fb-492b-8f1a-c488667e0df0","Type":"ContainerStarted","Data":"2e116cece061da101789abab2ef2c579ca32918017ee51583762f604d75014e7"} Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.025311 4823 generic.go:334] "Generic (PLEG): container finished" podID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerID="d6f013544d7b425a6fb5b86371be81ec25b6ecb9dd921bb82f090565c14055b2" exitCode=0 Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.025384 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnnz" event={"ID":"fef83eca-df2a-4e24-80fe-b8beb1b192c6","Type":"ContainerDied","Data":"d6f013544d7b425a6fb5b86371be81ec25b6ecb9dd921bb82f090565c14055b2"} Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.025411 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnnz" event={"ID":"fef83eca-df2a-4e24-80fe-b8beb1b192c6","Type":"ContainerStarted","Data":"144823a512a7a1f9ee148da6964d35434148b3bcfebae3fb7f1fccac32ff9175"} Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.030944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" 
event={"ID":"49ca3409-f1c5-41e7-aabe-3382b23fd48c","Type":"ContainerStarted","Data":"a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39"} Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.030976 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" event={"ID":"49ca3409-f1c5-41e7-aabe-3382b23fd48c","Type":"ContainerStarted","Data":"20eee32bf2304c9f40fec3b7f73f38874598d01e1cf8f4acd773c2d981a493b7"} Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.030989 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.053273 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" podStartSLOduration=126.053252409 podStartE2EDuration="2m6.053252409s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:04.051676529 +0000 UTC m=+144.977807389" watchObservedRunningTime="2026-01-21 17:19:04.053252409 +0000 UTC m=+144.979383269" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.257869 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.404182 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5gw98"] Jan 21 17:19:04 crc kubenswrapper[4823]: E0121 17:19:04.404488 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450e2dbe-320e-45fa-8122-26b905dfb601" containerName="collect-profiles" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.404513 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="450e2dbe-320e-45fa-8122-26b905dfb601" containerName="collect-profiles" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.404699 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="450e2dbe-320e-45fa-8122-26b905dfb601" containerName="collect-profiles" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.406169 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.408374 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.409240 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450e2dbe-320e-45fa-8122-26b905dfb601-config-volume\") pod \"450e2dbe-320e-45fa-8122-26b905dfb601\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.409263 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gw98"] Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.409289 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450e2dbe-320e-45fa-8122-26b905dfb601-secret-volume\") pod \"450e2dbe-320e-45fa-8122-26b905dfb601\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.409377 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbm6r\" (UniqueName: \"kubernetes.io/projected/450e2dbe-320e-45fa-8122-26b905dfb601-kube-api-access-lbm6r\") pod \"450e2dbe-320e-45fa-8122-26b905dfb601\" (UID: \"450e2dbe-320e-45fa-8122-26b905dfb601\") " Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.410687 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450e2dbe-320e-45fa-8122-26b905dfb601-config-volume" (OuterVolumeSpecName: "config-volume") pod "450e2dbe-320e-45fa-8122-26b905dfb601" (UID: "450e2dbe-320e-45fa-8122-26b905dfb601"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.415469 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450e2dbe-320e-45fa-8122-26b905dfb601-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "450e2dbe-320e-45fa-8122-26b905dfb601" (UID: "450e2dbe-320e-45fa-8122-26b905dfb601"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.460170 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450e2dbe-320e-45fa-8122-26b905dfb601-kube-api-access-lbm6r" (OuterVolumeSpecName: "kube-api-access-lbm6r") pod "450e2dbe-320e-45fa-8122-26b905dfb601" (UID: "450e2dbe-320e-45fa-8122-26b905dfb601"). InnerVolumeSpecName "kube-api-access-lbm6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.468128 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:04 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:04 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:04 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.468192 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.510947 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-catalog-content\") pod \"redhat-marketplace-5gw98\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.511028 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-utilities\") pod \"redhat-marketplace-5gw98\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.511108 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psdjz\" (UniqueName: \"kubernetes.io/projected/b3361a52-0a28-4be5-8216-a324cbba0c60-kube-api-access-psdjz\") pod \"redhat-marketplace-5gw98\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.511158 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450e2dbe-320e-45fa-8122-26b905dfb601-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.511172 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450e2dbe-320e-45fa-8122-26b905dfb601-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.511184 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbm6r\" (UniqueName: \"kubernetes.io/projected/450e2dbe-320e-45fa-8122-26b905dfb601-kube-api-access-lbm6r\") on node \"crc\" DevicePath \"\"" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.612110 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psdjz\" (UniqueName: \"kubernetes.io/projected/b3361a52-0a28-4be5-8216-a324cbba0c60-kube-api-access-psdjz\") pod \"redhat-marketplace-5gw98\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.612204 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-catalog-content\") pod \"redhat-marketplace-5gw98\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.612253 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-utilities\") pod \"redhat-marketplace-5gw98\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.612787 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-utilities\") pod \"redhat-marketplace-5gw98\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.612936 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-catalog-content\") pod \"redhat-marketplace-5gw98\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.631181 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psdjz\" (UniqueName: \"kubernetes.io/projected/b3361a52-0a28-4be5-8216-a324cbba0c60-kube-api-access-psdjz\") pod \"redhat-marketplace-5gw98\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.771929 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.801001 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqjn"] Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.802230 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.817570 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqjn"] Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.916584 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-catalog-content\") pod \"redhat-marketplace-2hqjn\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.916769 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-utilities\") pod \"redhat-marketplace-2hqjn\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:04 crc kubenswrapper[4823]: I0121 17:19:04.917087 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj985\" (UniqueName: \"kubernetes.io/projected/25390b7d-a426-47af-b0c5-dfa4c6a64667-kube-api-access-bj985\") pod \"redhat-marketplace-2hqjn\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.013783 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gw98"] Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.023907 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-catalog-content\") pod \"redhat-marketplace-2hqjn\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.023976 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-utilities\") pod \"redhat-marketplace-2hqjn\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.024012 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj985\" (UniqueName: \"kubernetes.io/projected/25390b7d-a426-47af-b0c5-dfa4c6a64667-kube-api-access-bj985\") pod \"redhat-marketplace-2hqjn\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.025360 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-catalog-content\") pod \"redhat-marketplace-2hqjn\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.025828 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-utilities\") pod \"redhat-marketplace-2hqjn\" (UID: 
\"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.042639 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj985\" (UniqueName: \"kubernetes.io/projected/25390b7d-a426-47af-b0c5-dfa4c6a64667-kube-api-access-bj985\") pod \"redhat-marketplace-2hqjn\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.058894 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.059971 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.065272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" event={"ID":"450e2dbe-320e-45fa-8122-26b905dfb601","Type":"ContainerDied","Data":"bd3d106b1b433ca734932646aded22ca872c0d4283dc1a68fb98c52258f5b25a"} Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.065315 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd3d106b1b433ca734932646aded22ca872c0d4283dc1a68fb98c52258f5b25a" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.065390 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.066652 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.068522 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gw98" event={"ID":"b3361a52-0a28-4be5-8216-a324cbba0c60","Type":"ContainerStarted","Data":"daad94bc06fa9bd01e3ef6fd61b5a0b2a2bbfadb4ee5a4d7de3a53c29262d620"} Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.135660 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.136407 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.137413 4823 patch_prober.go:28] interesting pod/console-f9d7485db-4dj4t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.137468 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4dj4t" podUID="ac886837-67ac-48e7-b5cb-024a0ed1ea01" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.139318 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.145324 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-jmjt6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.145367 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jmjt6" podUID="c474281f-3344-4846-9dd2-b78f8c3b7145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.145428 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-jmjt6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.145442 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jmjt6" podUID="c474281f-3344-4846-9dd2-b78f8c3b7145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.413375 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tkfhr"] Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.418141 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.423591 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkfhr"] Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.425551 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.435873 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.435944 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-catalog-content\") pod \"redhat-operators-tkfhr\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.435980 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-utilities\") pod \"redhat-operators-tkfhr\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.436034 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.436090 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.436147 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.436220 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbmvf\" (UniqueName: \"kubernetes.io/projected/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-kube-api-access-hbmvf\") pod \"redhat-operators-tkfhr\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.449472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.450948 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.454910 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.455436 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:05 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:05 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:05 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.455476 4823 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.457175 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.462766 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.470417 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.476751 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.482337 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqjn"] Jan 21 17:19:05 crc kubenswrapper[4823]: W0121 17:19:05.512317 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25390b7d_a426_47af_b0c5_dfa4c6a64667.slice/crio-0f4b72422a2b6c14e678a4f7fcb90133324ca584d218720450dc78023f34bb7c WatchSource:0}: Error finding container 0f4b72422a2b6c14e678a4f7fcb90133324ca584d218720450dc78023f34bb7c: Status 404 returned error can't find the container with id 0f4b72422a2b6c14e678a4f7fcb90133324ca584d218720450dc78023f34bb7c Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.537909 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbmvf\" (UniqueName: \"kubernetes.io/projected/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-kube-api-access-hbmvf\") pod \"redhat-operators-tkfhr\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.538001 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-catalog-content\") pod \"redhat-operators-tkfhr\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.538055 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-utilities\") pod \"redhat-operators-tkfhr\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.539375 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-catalog-content\") pod \"redhat-operators-tkfhr\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 
17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.539322 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-utilities\") pod \"redhat-operators-tkfhr\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.557023 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbmvf\" (UniqueName: \"kubernetes.io/projected/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-kube-api-access-hbmvf\") pod \"redhat-operators-tkfhr\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.743107 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.802974 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hw2m8"] Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.803907 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.816792 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hw2m8"] Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.846943 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lhmt\" (UniqueName: \"kubernetes.io/projected/133279e8-0382-4c22-aee4-423701729b21-kube-api-access-8lhmt\") pod \"redhat-operators-hw2m8\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.847001 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-catalog-content\") pod \"redhat-operators-hw2m8\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.847067 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-utilities\") pod \"redhat-operators-hw2m8\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:05 crc kubenswrapper[4823]: W0121 17:19:05.862952 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-2d342583cce55e533cd7285f4e318077ec2dcc3b20dfe5077d979fd1002fe1d2 WatchSource:0}: Error finding container 2d342583cce55e533cd7285f4e318077ec2dcc3b20dfe5077d979fd1002fe1d2: Status 404 returned error can't find the container with id 2d342583cce55e533cd7285f4e318077ec2dcc3b20dfe5077d979fd1002fe1d2 Jan 21 17:19:05 crc kubenswrapper[4823]: W0121 17:19:05.923496 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-2a3d506be0db315f3ffb3fe83027a54bb7413f653d17ef572bd65b4ede0cb822 
WatchSource:0}: Error finding container 2a3d506be0db315f3ffb3fe83027a54bb7413f653d17ef572bd65b4ede0cb822: Status 404 returned error can't find the container with id 2a3d506be0db315f3ffb3fe83027a54bb7413f653d17ef572bd65b4ede0cb822 Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.948432 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lhmt\" (UniqueName: \"kubernetes.io/projected/133279e8-0382-4c22-aee4-423701729b21-kube-api-access-8lhmt\") pod \"redhat-operators-hw2m8\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.948508 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-catalog-content\") pod \"redhat-operators-hw2m8\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.948572 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-utilities\") pod \"redhat-operators-hw2m8\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.949186 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-catalog-content\") pod \"redhat-operators-hw2m8\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.949690 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-utilities\") pod \"redhat-operators-hw2m8\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:05 crc kubenswrapper[4823]: I0121 17:19:05.971949 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lhmt\" (UniqueName: \"kubernetes.io/projected/133279e8-0382-4c22-aee4-423701729b21-kube-api-access-8lhmt\") pod \"redhat-operators-hw2m8\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:06 crc kubenswrapper[4823]: W0121 17:19:06.018655 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-ce4889aedc9835360f4d741eada76d891829320993a5815068d8523053b7ea7f WatchSource:0}: Error finding container ce4889aedc9835360f4d741eada76d891829320993a5815068d8523053b7ea7f: Status 404 returned error can't find the container with id ce4889aedc9835360f4d741eada76d891829320993a5815068d8523053b7ea7f Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.063035 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.063076 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.070506 4823 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.083625 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ce4889aedc9835360f4d741eada76d891829320993a5815068d8523053b7ea7f"} Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.086673 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2a3d506be0db315f3ffb3fe83027a54bb7413f653d17ef572bd65b4ede0cb822"} Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.097502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"72072d3abf3b316c2f4b8cffe54453dbaf14cd86bafc2ef95f7cc1d168824962"} Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.097842 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2d342583cce55e533cd7285f4e318077ec2dcc3b20dfe5077d979fd1002fe1d2"} Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.102700 4823 generic.go:334] "Generic (PLEG): container finished" podID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerID="d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711" exitCode=0 Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.102806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqjn" event={"ID":"25390b7d-a426-47af-b0c5-dfa4c6a64667","Type":"ContainerDied","Data":"d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711"} Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.102891 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqjn" event={"ID":"25390b7d-a426-47af-b0c5-dfa4c6a64667","Type":"ContainerStarted","Data":"0f4b72422a2b6c14e678a4f7fcb90133324ca584d218720450dc78023f34bb7c"} Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.105995 4823 generic.go:334] "Generic (PLEG): container finished" podID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerID="0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1" exitCode=0 Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.107485 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gw98" event={"ID":"b3361a52-0a28-4be5-8216-a324cbba0c60","Type":"ContainerDied","Data":"0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1"} Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.112916 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5g629" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.117636 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g82c4" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.158891 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.331803 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkfhr"] Jan 21 17:19:06 crc kubenswrapper[4823]: W0121 17:19:06.372024 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfcdf50c_881a_40d0_aa3b_cac833d1d9e6.slice/crio-897b2fcdeaf147bc169e3deaf1e8b03a381f2a9bca6edd1df0409233eecfa80b WatchSource:0}: Error finding container 897b2fcdeaf147bc169e3deaf1e8b03a381f2a9bca6edd1df0409233eecfa80b: Status 404 returned error can't find the container with id 897b2fcdeaf147bc169e3deaf1e8b03a381f2a9bca6edd1df0409233eecfa80b Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.379655 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.380340 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.383325 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.386558 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.417528 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.451466 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.455555 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:06 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:06 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:06 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.455610 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.467423 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.467479 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.568185 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.568238 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.568403 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.599708 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.726695 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:06 crc kubenswrapper[4823]: I0121 17:19:06.825006 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hw2m8"] Jan 21 17:19:06 crc kubenswrapper[4823]: W0121 17:19:06.873790 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod133279e8_0382_4c22_aee4_423701729b21.slice/crio-8d7d2f65ee392f1b2a3051852bc80cab1cb68f5aa280c62c50e3c50723aa5c3f WatchSource:0}: Error finding container 8d7d2f65ee392f1b2a3051852bc80cab1cb68f5aa280c62c50e3c50723aa5c3f: Status 404 returned error can't find the container with id 8d7d2f65ee392f1b2a3051852bc80cab1cb68f5aa280c62c50e3c50723aa5c3f Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.126516 4823 generic.go:334] "Generic (PLEG): container finished" podID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerID="2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6" exitCode=0 Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.126589 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkfhr" event={"ID":"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6","Type":"ContainerDied","Data":"2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6"} Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.126615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkfhr" event={"ID":"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6","Type":"ContainerStarted","Data":"897b2fcdeaf147bc169e3deaf1e8b03a381f2a9bca6edd1df0409233eecfa80b"} Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.130693 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-hw2m8" event={"ID":"133279e8-0382-4c22-aee4-423701729b21","Type":"ContainerStarted","Data":"8d7d2f65ee392f1b2a3051852bc80cab1cb68f5aa280c62c50e3c50723aa5c3f"} Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.133701 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3868534b4fffef60c126e038dfa517aae63b7dc7508aeeba96dd38adcaedfbb7"} Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.137579 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0291d8750b5096bad4653556b93a0416f8a0d9ad95d46dd2f0914230818aec3e"} Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.144980 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.231507 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.453376 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:07 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:07 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:07 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:07 crc kubenswrapper[4823]: I0121 17:19:07.453720 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:08 crc kubenswrapper[4823]: I0121 17:19:08.148741 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"21486af1-8bbf-40e1-9e64-ef3bde9664f1","Type":"ContainerStarted","Data":"c16bbc72fb3ddfad39648a1fae4f81dab137b02ed793f131afee442037bb54e5"} Jan 21 17:19:08 crc kubenswrapper[4823]: I0121 17:19:08.167190 4823 generic.go:334] "Generic (PLEG): container finished" podID="133279e8-0382-4c22-aee4-423701729b21" containerID="42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050" exitCode=0 Jan 21 17:19:08 crc kubenswrapper[4823]: I0121 17:19:08.167815 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw2m8" event={"ID":"133279e8-0382-4c22-aee4-423701729b21","Type":"ContainerDied","Data":"42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050"} Jan 21 17:19:08 crc kubenswrapper[4823]: I0121 17:19:08.457240 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:08 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:08 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:08 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:08 crc 
kubenswrapper[4823]: I0121 17:19:08.457308 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:09 crc kubenswrapper[4823]: I0121 17:19:09.182181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"21486af1-8bbf-40e1-9e64-ef3bde9664f1","Type":"ContainerStarted","Data":"6f2734a77196469cb1acac30f6619753493aa1e8ea109cf9cd4ced8c41db0f34"} Jan 21 17:19:09 crc kubenswrapper[4823]: I0121 17:19:09.452213 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:09 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:09 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:09 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:09 crc kubenswrapper[4823]: I0121 17:19:09.452262 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.192717 4823 generic.go:334] "Generic (PLEG): container finished" podID="21486af1-8bbf-40e1-9e64-ef3bde9664f1" containerID="6f2734a77196469cb1acac30f6619753493aa1e8ea109cf9cd4ced8c41db0f34" exitCode=0 Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.192805 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"21486af1-8bbf-40e1-9e64-ef3bde9664f1","Type":"ContainerDied","Data":"6f2734a77196469cb1acac30f6619753493aa1e8ea109cf9cd4ced8c41db0f34"} Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.451732 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:10 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:10 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:10 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.451805 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.539117 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.540179 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.542414 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.543677 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.575798 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.727960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a3826-b3e3-487d-9dc1-c33b564f246a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a55a3826-b3e3-487d-9dc1-c33b564f246a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.728305 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a3826-b3e3-487d-9dc1-c33b564f246a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a55a3826-b3e3-487d-9dc1-c33b564f246a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.829349 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a3826-b3e3-487d-9dc1-c33b564f246a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a55a3826-b3e3-487d-9dc1-c33b564f246a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.829642 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a3826-b3e3-487d-9dc1-c33b564f246a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a55a3826-b3e3-487d-9dc1-c33b564f246a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.829730 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a3826-b3e3-487d-9dc1-c33b564f246a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a55a3826-b3e3-487d-9dc1-c33b564f246a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.861187 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a3826-b3e3-487d-9dc1-c33b564f246a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a55a3826-b3e3-487d-9dc1-c33b564f246a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:10 crc kubenswrapper[4823]: I0121 17:19:10.869965 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.453308 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.461897 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:11 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:11 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:11 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.461954 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:11 crc kubenswrapper[4823]: W0121 17:19:11.481971 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda55a3826_b3e3_487d_9dc1_c33b564f246a.slice/crio-e45b24a33ca2f8f236288240278a755ae79572d344811884bc7f3a1b8c4c9049 WatchSource:0}: Error finding container e45b24a33ca2f8f236288240278a755ae79572d344811884bc7f3a1b8c4c9049: Status 404 returned error can't find the container with id e45b24a33ca2f8f236288240278a755ae79572d344811884bc7f3a1b8c4c9049 Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.537061 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.551987 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kubelet-dir\") pod \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\" (UID: \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\") " Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.552055 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kube-api-access\") pod \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\" (UID: \"21486af1-8bbf-40e1-9e64-ef3bde9664f1\") " Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.553688 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "21486af1-8bbf-40e1-9e64-ef3bde9664f1" (UID: "21486af1-8bbf-40e1-9e64-ef3bde9664f1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.574453 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "21486af1-8bbf-40e1-9e64-ef3bde9664f1" (UID: "21486af1-8bbf-40e1-9e64-ef3bde9664f1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.653556 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.653601 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21486af1-8bbf-40e1-9e64-ef3bde9664f1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 17:19:11 crc kubenswrapper[4823]: I0121 17:19:11.707810 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nxscj" Jan 21 17:19:12 crc kubenswrapper[4823]: I0121 17:19:12.208417 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"21486af1-8bbf-40e1-9e64-ef3bde9664f1","Type":"ContainerDied","Data":"c16bbc72fb3ddfad39648a1fae4f81dab137b02ed793f131afee442037bb54e5"} Jan 21 17:19:12 crc kubenswrapper[4823]: I0121 17:19:12.208703 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16bbc72fb3ddfad39648a1fae4f81dab137b02ed793f131afee442037bb54e5" Jan 21 17:19:12 crc kubenswrapper[4823]: I0121 17:19:12.208431 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 17:19:12 crc kubenswrapper[4823]: I0121 17:19:12.210180 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a55a3826-b3e3-487d-9dc1-c33b564f246a","Type":"ContainerStarted","Data":"e45b24a33ca2f8f236288240278a755ae79572d344811884bc7f3a1b8c4c9049"} Jan 21 17:19:12 crc kubenswrapper[4823]: I0121 17:19:12.451924 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:12 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:12 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:12 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:12 crc kubenswrapper[4823]: I0121 17:19:12.451988 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:13 crc kubenswrapper[4823]: I0121 17:19:13.222200 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:19:13 crc kubenswrapper[4823]: I0121 17:19:13.452592 4823 patch_prober.go:28] interesting pod/router-default-5444994796-vttrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 17:19:13 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 21 17:19:13 crc kubenswrapper[4823]: [+]process-running ok Jan 21 17:19:13 crc kubenswrapper[4823]: healthz check failed Jan 21 17:19:13 crc kubenswrapper[4823]: I0121 17:19:13.452648 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttrq" 
podUID="18f96219-7979-4162-8a45-7439bdb10075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:14 crc kubenswrapper[4823]: I0121 17:19:14.228173 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a55a3826-b3e3-487d-9dc1-c33b564f246a","Type":"ContainerStarted","Data":"02dbcdf82672b6f7062a048f73de78e13d53147f3fb7839c33ecbde54b1987f0"} Jan 21 17:19:14 crc kubenswrapper[4823]: I0121 17:19:14.739077 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:19:14 crc kubenswrapper[4823]: I0121 17:19:14.743107 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-vttrq" Jan 21 17:19:15 crc kubenswrapper[4823]: I0121 17:19:15.070706 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:19:15 crc kubenswrapper[4823]: I0121 17:19:15.070775 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:19:15 crc kubenswrapper[4823]: I0121 17:19:15.136717 4823 patch_prober.go:28] interesting pod/console-f9d7485db-4dj4t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 21 17:19:15 crc kubenswrapper[4823]: I0121 17:19:15.136783 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4dj4t" podUID="ac886837-67ac-48e7-b5cb-024a0ed1ea01" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 21 17:19:15 crc kubenswrapper[4823]: I0121 17:19:15.160293 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jmjt6" Jan 21 17:19:15 crc kubenswrapper[4823]: I0121 17:19:15.239978 4823 generic.go:334] "Generic (PLEG): container finished" podID="a55a3826-b3e3-487d-9dc1-c33b564f246a" containerID="02dbcdf82672b6f7062a048f73de78e13d53147f3fb7839c33ecbde54b1987f0" exitCode=0 Jan 21 17:19:15 crc kubenswrapper[4823]: I0121 17:19:15.240495 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a55a3826-b3e3-487d-9dc1-c33b564f246a","Type":"ContainerDied","Data":"02dbcdf82672b6f7062a048f73de78e13d53147f3fb7839c33ecbde54b1987f0"} Jan 21 17:19:19 crc kubenswrapper[4823]: I0121 17:19:19.883498 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:19:19 crc kubenswrapper[4823]: I0121 17:19:19.890909 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bcd33a4-ea1e-4977-8456-e34f2ed4c680-metrics-certs\") pod \"network-metrics-daemon-htjnl\" (UID: \"9bcd33a4-ea1e-4977-8456-e34f2ed4c680\") " pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:19:19 crc kubenswrapper[4823]: I0121 17:19:19.910979 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:19 crc kubenswrapper[4823]: I0121 17:19:19.985426 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a3826-b3e3-487d-9dc1-c33b564f246a-kubelet-dir\") pod \"a55a3826-b3e3-487d-9dc1-c33b564f246a\" (UID: \"a55a3826-b3e3-487d-9dc1-c33b564f246a\") " Jan 21 17:19:19 crc kubenswrapper[4823]: I0121 17:19:19.985577 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a3826-b3e3-487d-9dc1-c33b564f246a-kube-api-access\") pod \"a55a3826-b3e3-487d-9dc1-c33b564f246a\" (UID: \"a55a3826-b3e3-487d-9dc1-c33b564f246a\") " Jan 21 17:19:19 crc kubenswrapper[4823]: I0121 17:19:19.987756 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55a3826-b3e3-487d-9dc1-c33b564f246a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a55a3826-b3e3-487d-9dc1-c33b564f246a" (UID: "a55a3826-b3e3-487d-9dc1-c33b564f246a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:19:19 crc kubenswrapper[4823]: I0121 17:19:19.999105 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55a3826-b3e3-487d-9dc1-c33b564f246a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a55a3826-b3e3-487d-9dc1-c33b564f246a" (UID: "a55a3826-b3e3-487d-9dc1-c33b564f246a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:19:20 crc kubenswrapper[4823]: I0121 17:19:20.056943 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-htjnl" Jan 21 17:19:20 crc kubenswrapper[4823]: I0121 17:19:20.087066 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a3826-b3e3-487d-9dc1-c33b564f246a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:19:20 crc kubenswrapper[4823]: I0121 17:19:20.087093 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a3826-b3e3-487d-9dc1-c33b564f246a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 17:19:20 crc kubenswrapper[4823]: I0121 17:19:20.270617 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a55a3826-b3e3-487d-9dc1-c33b564f246a","Type":"ContainerDied","Data":"e45b24a33ca2f8f236288240278a755ae79572d344811884bc7f3a1b8c4c9049"} Jan 21 17:19:20 crc kubenswrapper[4823]: I0121 17:19:20.270660 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45b24a33ca2f8f236288240278a755ae79572d344811884bc7f3a1b8c4c9049" Jan 21 17:19:20 crc kubenswrapper[4823]: I0121 17:19:20.270706 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 17:19:23 crc kubenswrapper[4823]: I0121 17:19:23.398898 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:19:25 crc kubenswrapper[4823]: I0121 17:19:25.140194 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:19:25 crc kubenswrapper[4823]: I0121 17:19:25.144934 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:19:36 crc kubenswrapper[4823]: I0121 17:19:36.837763 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knqcz" Jan 21 17:19:39 crc kubenswrapper[4823]: E0121 17:19:39.027354 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 17:19:39 crc kubenswrapper[4823]: E0121 17:19:39.027772 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95pzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pqnps_openshift-marketplace(8edc9d3c-22fb-492b-8f1a-c488667e0df0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:19:39 crc kubenswrapper[4823]: E0121 17:19:39.029264 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pqnps" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" Jan 21 17:19:40 crc kubenswrapper[4823]: E0121 
17:19:40.080755 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pqnps" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" Jan 21 17:19:40 crc kubenswrapper[4823]: E0121 17:19:40.344720 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 17:19:40 crc kubenswrapper[4823]: E0121 17:19:40.344908 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bj985,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2hqjn_openshift-marketplace(25390b7d-a426-47af-b0c5-dfa4c6a64667): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:19:40 crc kubenswrapper[4823]: E0121 17:19:40.346074 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2hqjn" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" Jan 21 17:19:43 crc kubenswrapper[4823]: E0121 17:19:43.209222 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2hqjn" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" Jan 21 17:19:43 crc kubenswrapper[4823]: E0121 17:19:43.277835 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 17:19:43 crc kubenswrapper[4823]: E0121 17:19:43.278285 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lhmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hw2m8_openshift-marketplace(133279e8-0382-4c22-aee4-423701729b21): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:19:43 crc kubenswrapper[4823]: E0121 17:19:43.279683 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hw2m8" podUID="133279e8-0382-4c22-aee4-423701729b21" Jan 21 17:19:43 crc kubenswrapper[4823]: E0121 17:19:43.523919 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hw2m8" podUID="133279e8-0382-4c22-aee4-423701729b21" Jan 21 17:19:43 crc kubenswrapper[4823]: E0121 17:19:43.598076 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 17:19:43 crc kubenswrapper[4823]: E0121 17:19:43.598598 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbmvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tkfhr_openshift-marketplace(dfcdf50c-881a-40d0-aa3b-cac833d1d9e6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:19:43 crc kubenswrapper[4823]: E0121 17:19:43.601468 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tkfhr" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.070396 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.070700 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.419471 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tkfhr" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.461958 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.549748 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.550049 4823 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55a3826-b3e3-487d-9dc1-c33b564f246a" containerName="pruner" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.550066 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55a3826-b3e3-487d-9dc1-c33b564f246a" containerName="pruner" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.550081 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21486af1-8bbf-40e1-9e64-ef3bde9664f1" containerName="pruner" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.550089 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="21486af1-8bbf-40e1-9e64-ef3bde9664f1" containerName="pruner" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.550210 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55a3826-b3e3-487d-9dc1-c33b564f246a" containerName="pruner" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.550226 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="21486af1-8bbf-40e1-9e64-ef3bde9664f1" containerName="pruner" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.550674 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.556783 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.557365 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.568490 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.647004 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.647055 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.708782 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.709418 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcsbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zlnv2_openshift-marketplace(989e8ecd-3950-4494-b9af-911eeeed065c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.710799 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zlnv2" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.711167 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.711436 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbrfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9pg9m_openshift-marketplace(035531d0-ecfd-4d31-be47-08fc49762b7e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.712535 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9pg9m" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.749566 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.749645 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.749776 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.773488 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:45 crc kubenswrapper[4823]: 
E0121 17:19:45.811030 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.811210 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kcnnz_openshift-marketplace(fef83eca-df2a-4e24-80fe-b8beb1b192c6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:19:45 crc kubenswrapper[4823]: E0121 17:19:45.812272 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kcnnz" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.879442 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:45 crc kubenswrapper[4823]: I0121 17:19:45.973876 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-htjnl"] Jan 21 17:19:46 crc kubenswrapper[4823]: W0121 17:19:46.031943 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bcd33a4_ea1e_4977_8456_e34f2ed4c680.slice/crio-a1f470635b03c7ea50e223799e37329140dc142242af9efcd4683783bc06d7f0 WatchSource:0}: Error finding container a1f470635b03c7ea50e223799e37329140dc142242af9efcd4683783bc06d7f0: Status 404 returned error can't find the container with id a1f470635b03c7ea50e223799e37329140dc142242af9efcd4683783bc06d7f0 Jan 21 17:19:46 crc kubenswrapper[4823]: I0121 17:19:46.306896 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 17:19:46 crc kubenswrapper[4823]: W0121 17:19:46.313322 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podafa84d4e_8684_4c79_88ba_57f7adcd7d54.slice/crio-dc18c91e95063c1391c4c6aa891ed117a503187b2cca361ab10218cdef8b0ab7 WatchSource:0}: Error finding container dc18c91e95063c1391c4c6aa891ed117a503187b2cca361ab10218cdef8b0ab7: Status 404 returned error can't find the container with id dc18c91e95063c1391c4c6aa891ed117a503187b2cca361ab10218cdef8b0ab7 Jan 21 17:19:46 crc kubenswrapper[4823]: I0121 17:19:46.400420 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"afa84d4e-8684-4c79-88ba-57f7adcd7d54","Type":"ContainerStarted","Data":"dc18c91e95063c1391c4c6aa891ed117a503187b2cca361ab10218cdef8b0ab7"} Jan 21 17:19:46 crc kubenswrapper[4823]: I0121 17:19:46.408085 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-htjnl" event={"ID":"9bcd33a4-ea1e-4977-8456-e34f2ed4c680","Type":"ContainerStarted","Data":"67ba214b9b0563a54fa0d456519fbdee2df6555025916eccbee75077221fbee9"} Jan 21 17:19:46 crc kubenswrapper[4823]: I0121 17:19:46.408126 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-htjnl" event={"ID":"9bcd33a4-ea1e-4977-8456-e34f2ed4c680","Type":"ContainerStarted","Data":"a1f470635b03c7ea50e223799e37329140dc142242af9efcd4683783bc06d7f0"} Jan 21 17:19:46 crc kubenswrapper[4823]: I0121 17:19:46.411323 4823 generic.go:334] "Generic (PLEG): container finished" podID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerID="855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109" exitCode=0 Jan 21 17:19:46 crc kubenswrapper[4823]: I0121 17:19:46.411450 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gw98" event={"ID":"b3361a52-0a28-4be5-8216-a324cbba0c60","Type":"ContainerDied","Data":"855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109"} Jan 21 17:19:46 crc kubenswrapper[4823]: E0121 17:19:46.413617 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zlnv2" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" Jan 21 17:19:46 crc kubenswrapper[4823]: E0121 17:19:46.413789 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kcnnz" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" Jan 21 17:19:46 crc kubenswrapper[4823]: E0121 17:19:46.413965 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9pg9m" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" Jan 21 17:19:47 crc kubenswrapper[4823]: I0121 17:19:47.417411 4823 generic.go:334] "Generic (PLEG): container finished" podID="afa84d4e-8684-4c79-88ba-57f7adcd7d54" containerID="400899c74430ad76a65dff7bb50ec48e3558666894091f46fb27f78a3ab4edab" exitCode=0 Jan 21 17:19:47 crc kubenswrapper[4823]: I0121 17:19:47.417487 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"afa84d4e-8684-4c79-88ba-57f7adcd7d54","Type":"ContainerDied","Data":"400899c74430ad76a65dff7bb50ec48e3558666894091f46fb27f78a3ab4edab"} Jan 21 17:19:47 crc kubenswrapper[4823]: I0121 17:19:47.419226 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-htjnl" event={"ID":"9bcd33a4-ea1e-4977-8456-e34f2ed4c680","Type":"ContainerStarted","Data":"db6088ec59852ade612e8f381b2d094e77613fa12343cdc3c41e64c107f2eb8b"} Jan 21 17:19:47 crc kubenswrapper[4823]: I0121 17:19:47.421594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gw98" event={"ID":"b3361a52-0a28-4be5-8216-a324cbba0c60","Type":"ContainerStarted","Data":"4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204"} Jan 21 17:19:47 crc kubenswrapper[4823]: I0121 17:19:47.443692 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-htjnl" podStartSLOduration=169.443670262 podStartE2EDuration="2m49.443670262s" podCreationTimestamp="2026-01-21 17:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:47.443248312 +0000 UTC m=+188.369379202" watchObservedRunningTime="2026-01-21 17:19:47.443670262 +0000 UTC m=+188.369801152" Jan 21 17:19:47 crc kubenswrapper[4823]: I0121 17:19:47.460875 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5gw98" podStartSLOduration=2.586809579 podStartE2EDuration="43.460845367s" podCreationTimestamp="2026-01-21 17:19:04 +0000 UTC" firstStartedPulling="2026-01-21 17:19:06.108380816 +0000 UTC m=+147.034511676" lastFinishedPulling="2026-01-21 17:19:46.982416594 +0000 UTC m=+187.908547464" observedRunningTime="2026-01-21 17:19:47.460532799 +0000 UTC m=+188.386663669" watchObservedRunningTime="2026-01-21 17:19:47.460845367 +0000 UTC m=+188.386976227" Jan 21 17:19:48 crc kubenswrapper[4823]: I0121 17:19:48.728274 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:48 crc kubenswrapper[4823]: I0121 17:19:48.790948 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kubelet-dir\") pod \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\" (UID: \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\") " Jan 21 17:19:48 crc kubenswrapper[4823]: I0121 17:19:48.791081 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "afa84d4e-8684-4c79-88ba-57f7adcd7d54" (UID: "afa84d4e-8684-4c79-88ba-57f7adcd7d54"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:19:48 crc kubenswrapper[4823]: I0121 17:19:48.791097 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kube-api-access\") pod \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\" (UID: \"afa84d4e-8684-4c79-88ba-57f7adcd7d54\") " Jan 21 17:19:48 crc kubenswrapper[4823]: I0121 17:19:48.791388 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:19:48 crc kubenswrapper[4823]: I0121 17:19:48.797436 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "afa84d4e-8684-4c79-88ba-57f7adcd7d54" (UID: "afa84d4e-8684-4c79-88ba-57f7adcd7d54"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:19:48 crc kubenswrapper[4823]: I0121 17:19:48.892641 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afa84d4e-8684-4c79-88ba-57f7adcd7d54-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 17:19:49 crc kubenswrapper[4823]: I0121 17:19:49.431222 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"afa84d4e-8684-4c79-88ba-57f7adcd7d54","Type":"ContainerDied","Data":"dc18c91e95063c1391c4c6aa891ed117a503187b2cca361ab10218cdef8b0ab7"} Jan 21 17:19:49 crc kubenswrapper[4823]: I0121 17:19:49.431260 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc18c91e95063c1391c4c6aa891ed117a503187b2cca361ab10218cdef8b0ab7" Jan 21 17:19:49 crc kubenswrapper[4823]: I0121 17:19:49.431314 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.552290 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 17:19:51 crc kubenswrapper[4823]: E0121 17:19:51.553691 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa84d4e-8684-4c79-88ba-57f7adcd7d54" containerName="pruner" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.553768 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa84d4e-8684-4c79-88ba-57f7adcd7d54" containerName="pruner" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.553939 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa84d4e-8684-4c79-88ba-57f7adcd7d54" containerName="pruner" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.554352 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.556777 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.557042 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.557924 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.644420 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-var-lock\") pod \"installer-9-crc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.644741 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.644873 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kube-api-access\") pod \"installer-9-crc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.746998 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-var-lock\") pod \"installer-9-crc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.747109 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-var-lock\") pod \"installer-9-crc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.747209 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.747243 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.747294 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kube-api-access\") pod \"installer-9-crc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.771129 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kube-api-access\") pod \"installer-9-crc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:51 crc kubenswrapper[4823]: I0121 17:19:51.870751 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:19:52 crc kubenswrapper[4823]: I0121 17:19:52.262372 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 17:19:52 crc kubenswrapper[4823]: I0121 17:19:52.444917 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e34b3d5-6673-42ed-851b-ec7977fe71fc","Type":"ContainerStarted","Data":"c7812de7703641be7e750f60fdaa58b0bb7e9d6b406b19656eea08175b26568c"} Jan 21 17:19:54 crc kubenswrapper[4823]: I0121 17:19:54.455500 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e34b3d5-6673-42ed-851b-ec7977fe71fc","Type":"ContainerStarted","Data":"af943b61af43c3bec79b7dacbb08be7775c394c53e084de2a5782756125ca25d"} Jan 21 17:19:54 crc kubenswrapper[4823]: I0121 17:19:54.473938 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.473920804 podStartE2EDuration="3.473920804s" podCreationTimestamp="2026-01-21 17:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:19:54.472752275 +0000 UTC m=+195.398883145" watchObservedRunningTime="2026-01-21 17:19:54.473920804 +0000 UTC m=+195.400051664" Jan 21 17:19:54 crc kubenswrapper[4823]: I0121 17:19:54.686074 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8nkd"] Jan 21 17:19:54 crc kubenswrapper[4823]: I0121 17:19:54.773185 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:54 crc kubenswrapper[4823]: I0121 17:19:54.773885 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:54 crc kubenswrapper[4823]: 
I0121 17:19:54.880676 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:55 crc kubenswrapper[4823]: I0121 17:19:55.503696 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:19:56 crc kubenswrapper[4823]: I0121 17:19:56.467706 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqnps" event={"ID":"8edc9d3c-22fb-492b-8f1a-c488667e0df0","Type":"ContainerStarted","Data":"e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d"} Jan 21 17:19:56 crc kubenswrapper[4823]: I0121 17:19:56.472651 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqjn" event={"ID":"25390b7d-a426-47af-b0c5-dfa4c6a64667","Type":"ContainerStarted","Data":"8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6"} Jan 21 17:19:56 crc kubenswrapper[4823]: I0121 17:19:56.475246 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw2m8" event={"ID":"133279e8-0382-4c22-aee4-423701729b21","Type":"ContainerStarted","Data":"519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01"} Jan 21 17:19:57 crc kubenswrapper[4823]: I0121 17:19:57.497879 4823 generic.go:334] "Generic (PLEG): container finished" podID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerID="8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6" exitCode=0 Jan 21 17:19:57 crc kubenswrapper[4823]: I0121 17:19:57.497952 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqjn" event={"ID":"25390b7d-a426-47af-b0c5-dfa4c6a64667","Type":"ContainerDied","Data":"8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6"} Jan 21 17:19:57 crc kubenswrapper[4823]: I0121 17:19:57.500598 4823 generic.go:334] "Generic (PLEG): container finished" podID="133279e8-0382-4c22-aee4-423701729b21" containerID="519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01" exitCode=0 Jan 21 17:19:57 crc kubenswrapper[4823]: I0121 17:19:57.500663 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw2m8" event={"ID":"133279e8-0382-4c22-aee4-423701729b21","Type":"ContainerDied","Data":"519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01"} Jan 21 17:19:57 crc kubenswrapper[4823]: I0121 17:19:57.503811 4823 generic.go:334] "Generic (PLEG): container finished" podID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerID="e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d" exitCode=0 Jan 21 17:19:57 crc kubenswrapper[4823]: I0121 17:19:57.504830 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqnps" event={"ID":"8edc9d3c-22fb-492b-8f1a-c488667e0df0","Type":"ContainerDied","Data":"e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d"} Jan 21 17:19:58 crc kubenswrapper[4823]: I0121 17:19:58.510762 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqjn" event={"ID":"25390b7d-a426-47af-b0c5-dfa4c6a64667","Type":"ContainerStarted","Data":"267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c"} Jan 21 17:19:58 crc kubenswrapper[4823]: I0121 17:19:58.514080 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw2m8" 
event={"ID":"133279e8-0382-4c22-aee4-423701729b21","Type":"ContainerStarted","Data":"f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed"} Jan 21 17:19:58 crc kubenswrapper[4823]: I0121 17:19:58.515938 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqnps" event={"ID":"8edc9d3c-22fb-492b-8f1a-c488667e0df0","Type":"ContainerStarted","Data":"807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f"} Jan 21 17:19:58 crc kubenswrapper[4823]: I0121 17:19:58.536487 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2hqjn" podStartSLOduration=2.740958445 podStartE2EDuration="54.536469198s" podCreationTimestamp="2026-01-21 17:19:04 +0000 UTC" firstStartedPulling="2026-01-21 17:19:06.105147814 +0000 UTC m=+147.031278674" lastFinishedPulling="2026-01-21 17:19:57.900658567 +0000 UTC m=+198.826789427" observedRunningTime="2026-01-21 17:19:58.526793161 +0000 UTC m=+199.452924021" watchObservedRunningTime="2026-01-21 17:19:58.536469198 +0000 UTC m=+199.462600058" Jan 21 17:19:58 crc kubenswrapper[4823]: I0121 17:19:58.549812 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pqnps" podStartSLOduration=2.667825686 podStartE2EDuration="56.549796476s" podCreationTimestamp="2026-01-21 17:19:02 +0000 UTC" firstStartedPulling="2026-01-21 17:19:04.023016303 +0000 UTC m=+144.949147163" lastFinishedPulling="2026-01-21 17:19:57.904987093 +0000 UTC m=+198.831117953" observedRunningTime="2026-01-21 17:19:58.548129595 +0000 UTC m=+199.474260455" watchObservedRunningTime="2026-01-21 17:19:58.549796476 +0000 UTC m=+199.475927336" Jan 21 17:19:58 crc kubenswrapper[4823]: I0121 17:19:58.574208 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hw2m8" podStartSLOduration=2.750252907 podStartE2EDuration="53.574192615s" podCreationTimestamp="2026-01-21 17:19:05 +0000 UTC" firstStartedPulling="2026-01-21 17:19:07.133113543 +0000 UTC m=+148.059244403" lastFinishedPulling="2026-01-21 17:19:57.957053251 +0000 UTC m=+198.883184111" observedRunningTime="2026-01-21 17:19:58.571075189 +0000 UTC m=+199.497206049" watchObservedRunningTime="2026-01-21 17:19:58.574192615 +0000 UTC m=+199.500323475" Jan 21 17:20:02 crc kubenswrapper[4823]: I0121 17:20:02.761659 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:20:02 crc kubenswrapper[4823]: I0121 17:20:02.762023 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:20:02 crc kubenswrapper[4823]: I0121 17:20:02.805349 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:20:03 crc kubenswrapper[4823]: I0121 17:20:03.587728 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:20:05 crc kubenswrapper[4823]: I0121 17:20:05.139511 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:20:05 crc kubenswrapper[4823]: I0121 17:20:05.140068 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:20:05 crc kubenswrapper[4823]: I0121 
17:20:05.180552 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:20:05 crc kubenswrapper[4823]: I0121 17:20:05.612475 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:20:06 crc kubenswrapper[4823]: I0121 17:20:06.159915 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:20:06 crc kubenswrapper[4823]: I0121 17:20:06.160270 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:20:06 crc kubenswrapper[4823]: I0121 17:20:06.222797 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:20:06 crc kubenswrapper[4823]: I0121 17:20:06.610478 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:20:08 crc kubenswrapper[4823]: I0121 17:20:08.775382 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqjn"] Jan 21 17:20:08 crc kubenswrapper[4823]: I0121 17:20:08.775722 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2hqjn" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerName="registry-server" containerID="cri-o://267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c" gracePeriod=2 Jan 21 17:20:10 crc kubenswrapper[4823]: I0121 17:20:10.576912 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hw2m8"] Jan 21 17:20:10 crc kubenswrapper[4823]: I0121 17:20:10.577187 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hw2m8" podUID="133279e8-0382-4c22-aee4-423701729b21" containerName="registry-server" containerID="cri-o://f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed" gracePeriod=2 Jan 21 17:20:11 crc kubenswrapper[4823]: I0121 17:20:11.982390 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.017496 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-utilities\") pod \"133279e8-0382-4c22-aee4-423701729b21\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.017598 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lhmt\" (UniqueName: \"kubernetes.io/projected/133279e8-0382-4c22-aee4-423701729b21-kube-api-access-8lhmt\") pod \"133279e8-0382-4c22-aee4-423701729b21\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.017646 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-catalog-content\") pod \"133279e8-0382-4c22-aee4-423701729b21\" (UID: \"133279e8-0382-4c22-aee4-423701729b21\") " Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.018583 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-utilities" (OuterVolumeSpecName: "utilities") pod "133279e8-0382-4c22-aee4-423701729b21" (UID: "133279e8-0382-4c22-aee4-423701729b21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.023457 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133279e8-0382-4c22-aee4-423701729b21-kube-api-access-8lhmt" (OuterVolumeSpecName: "kube-api-access-8lhmt") pod "133279e8-0382-4c22-aee4-423701729b21" (UID: "133279e8-0382-4c22-aee4-423701729b21"). InnerVolumeSpecName "kube-api-access-8lhmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.030729 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.118433 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj985\" (UniqueName: \"kubernetes.io/projected/25390b7d-a426-47af-b0c5-dfa4c6a64667-kube-api-access-bj985\") pod \"25390b7d-a426-47af-b0c5-dfa4c6a64667\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.118522 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-utilities\") pod \"25390b7d-a426-47af-b0c5-dfa4c6a64667\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.118553 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-catalog-content\") pod \"25390b7d-a426-47af-b0c5-dfa4c6a64667\" (UID: \"25390b7d-a426-47af-b0c5-dfa4c6a64667\") " Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.118735 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lhmt\" (UniqueName: \"kubernetes.io/projected/133279e8-0382-4c22-aee4-423701729b21-kube-api-access-8lhmt\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.118747 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.120042 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-utilities" (OuterVolumeSpecName: "utilities") pod "25390b7d-a426-47af-b0c5-dfa4c6a64667" (UID: "25390b7d-a426-47af-b0c5-dfa4c6a64667"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.123065 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25390b7d-a426-47af-b0c5-dfa4c6a64667-kube-api-access-bj985" (OuterVolumeSpecName: "kube-api-access-bj985") pod "25390b7d-a426-47af-b0c5-dfa4c6a64667" (UID: "25390b7d-a426-47af-b0c5-dfa4c6a64667"). InnerVolumeSpecName "kube-api-access-bj985". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.146047 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25390b7d-a426-47af-b0c5-dfa4c6a64667" (UID: "25390b7d-a426-47af-b0c5-dfa4c6a64667"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.164060 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "133279e8-0382-4c22-aee4-423701729b21" (UID: "133279e8-0382-4c22-aee4-423701729b21"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.220451 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj985\" (UniqueName: \"kubernetes.io/projected/25390b7d-a426-47af-b0c5-dfa4c6a64667-kube-api-access-bj985\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.220485 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.220495 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25390b7d-a426-47af-b0c5-dfa4c6a64667-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.220504 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133279e8-0382-4c22-aee4-423701729b21-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.606930 4823 generic.go:334] "Generic (PLEG): container finished" podID="989e8ecd-3950-4494-b9af-911eeeed065c" containerID="82ff48353fa497ac88f83cf65d87c98d107cc002c63b1f9d25b379c543c85d79" exitCode=0 Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.607012 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnv2" event={"ID":"989e8ecd-3950-4494-b9af-911eeeed065c","Type":"ContainerDied","Data":"82ff48353fa497ac88f83cf65d87c98d107cc002c63b1f9d25b379c543c85d79"} Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.610665 4823 generic.go:334] "Generic (PLEG): container finished" podID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerID="0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d" exitCode=0 Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.610735 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pg9m" event={"ID":"035531d0-ecfd-4d31-be47-08fc49762b7e","Type":"ContainerDied","Data":"0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d"} Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.613149 4823 generic.go:334] "Generic (PLEG): container finished" podID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerID="267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c" exitCode=0 Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.613177 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hqjn" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.613233 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqjn" event={"ID":"25390b7d-a426-47af-b0c5-dfa4c6a64667","Type":"ContainerDied","Data":"267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c"} Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.613265 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqjn" event={"ID":"25390b7d-a426-47af-b0c5-dfa4c6a64667","Type":"ContainerDied","Data":"0f4b72422a2b6c14e678a4f7fcb90133324ca584d218720450dc78023f34bb7c"} Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.613286 4823 scope.go:117] "RemoveContainer" containerID="267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.620754 4823 generic.go:334] "Generic (PLEG): container finished" podID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerID="609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd" exitCode=0 Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.621303 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkfhr" event={"ID":"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6","Type":"ContainerDied","Data":"609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd"} Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.628066 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hw2m8" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.628110 4823 generic.go:334] "Generic (PLEG): container finished" podID="133279e8-0382-4c22-aee4-423701729b21" containerID="f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed" exitCode=0 Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.628192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw2m8" event={"ID":"133279e8-0382-4c22-aee4-423701729b21","Type":"ContainerDied","Data":"f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed"} Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.628285 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw2m8" event={"ID":"133279e8-0382-4c22-aee4-423701729b21","Type":"ContainerDied","Data":"8d7d2f65ee392f1b2a3051852bc80cab1cb68f5aa280c62c50e3c50723aa5c3f"} Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.635840 4823 generic.go:334] "Generic (PLEG): container finished" podID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerID="adc417f66b8494aa779669053c93301779cc3fff180a1ade0565209d9b059892" exitCode=0 Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.635910 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnnz" event={"ID":"fef83eca-df2a-4e24-80fe-b8beb1b192c6","Type":"ContainerDied","Data":"adc417f66b8494aa779669053c93301779cc3fff180a1ade0565209d9b059892"} Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.643815 4823 scope.go:117] "RemoveContainer" containerID="8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.674529 4823 scope.go:117] "RemoveContainer" containerID="d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 
17:20:12.694955 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqjn"] Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.698447 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqjn"] Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.707513 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hw2m8"] Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.710178 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hw2m8"] Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.733144 4823 scope.go:117] "RemoveContainer" containerID="267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c" Jan 21 17:20:12 crc kubenswrapper[4823]: E0121 17:20:12.733590 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c\": container with ID starting with 267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c not found: ID does not exist" containerID="267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.733611 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c"} err="failed to get container status \"267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c\": rpc error: code = NotFound desc = could not find container \"267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c\": container with ID starting with 267301f079cea1bc70413a8d008583ad53c10085fdb9b6a316a598499700dc4c not found: ID does not exist" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.733644 4823 scope.go:117] "RemoveContainer" containerID="8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6" Jan 21 17:20:12 crc kubenswrapper[4823]: E0121 17:20:12.733940 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6\": container with ID starting with 8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6 not found: ID does not exist" containerID="8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.733961 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6"} err="failed to get container status \"8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6\": rpc error: code = NotFound desc = could not find container \"8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6\": container with ID starting with 8f7095f8e1d29e9d59db7b2bc5f2e5cb9e740df60114f3a510b9ae25d0d90bd6 not found: ID does not exist" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.733979 4823 scope.go:117] "RemoveContainer" containerID="d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711" Jan 21 17:20:12 crc kubenswrapper[4823]: E0121 17:20:12.734320 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711\": 
container with ID starting with d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711 not found: ID does not exist" containerID="d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.734341 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711"} err="failed to get container status \"d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711\": rpc error: code = NotFound desc = could not find container \"d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711\": container with ID starting with d4f136c7a19be532bc539556bfc3be6c7f24ccfb913efaf9f8ecb4496f8f2711 not found: ID does not exist" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.734352 4823 scope.go:117] "RemoveContainer" containerID="f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.753875 4823 scope.go:117] "RemoveContainer" containerID="519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.776677 4823 scope.go:117] "RemoveContainer" containerID="42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.796927 4823 scope.go:117] "RemoveContainer" containerID="f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed" Jan 21 17:20:12 crc kubenswrapper[4823]: E0121 17:20:12.798340 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed\": container with ID starting with f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed not found: ID does not exist" containerID="f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.798384 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed"} err="failed to get container status \"f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed\": rpc error: code = NotFound desc = could not find container \"f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed\": container with ID starting with f8ae45636dad5cc2d9e935d7e15fc3799299a05b96bd9c1b93da09bea48a99ed not found: ID does not exist" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.798439 4823 scope.go:117] "RemoveContainer" containerID="519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01" Jan 21 17:20:12 crc kubenswrapper[4823]: E0121 17:20:12.798793 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01\": container with ID starting with 519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01 not found: ID does not exist" containerID="519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.798844 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01"} err="failed to get container status \"519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01\": 
rpc error: code = NotFound desc = could not find container \"519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01\": container with ID starting with 519d378186029cf4dd81d00dfdfbbeb969a90aceb129f4abee1411d0928afc01 not found: ID does not exist" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.798887 4823 scope.go:117] "RemoveContainer" containerID="42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050" Jan 21 17:20:12 crc kubenswrapper[4823]: E0121 17:20:12.799306 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050\": container with ID starting with 42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050 not found: ID does not exist" containerID="42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050" Jan 21 17:20:12 crc kubenswrapper[4823]: I0121 17:20:12.799390 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050"} err="failed to get container status \"42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050\": rpc error: code = NotFound desc = could not find container \"42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050\": container with ID starting with 42a1a576b27a56f05f9d6b5df9fc922f658336f98660c860c07019a1467e3050 not found: ID does not exist" Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.351079 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133279e8-0382-4c22-aee4-423701729b21" path="/var/lib/kubelet/pods/133279e8-0382-4c22-aee4-423701729b21/volumes" Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.351922 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" path="/var/lib/kubelet/pods/25390b7d-a426-47af-b0c5-dfa4c6a64667/volumes" Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.643158 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnnz" event={"ID":"fef83eca-df2a-4e24-80fe-b8beb1b192c6","Type":"ContainerStarted","Data":"25d2e2258607137e51dc3aef01e63476387a37723a7ab3af0e2e736b2de64cd7"} Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.645730 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnv2" event={"ID":"989e8ecd-3950-4494-b9af-911eeeed065c","Type":"ContainerStarted","Data":"06f369498d2375a20d630a5f07a26e47a8ddded228394a8dd69590cc7f430a24"} Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.647873 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pg9m" event={"ID":"035531d0-ecfd-4d31-be47-08fc49762b7e","Type":"ContainerStarted","Data":"8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe"} Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.651478 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkfhr" event={"ID":"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6","Type":"ContainerStarted","Data":"559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e"} Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.692903 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kcnnz" podStartSLOduration=2.667633435 podStartE2EDuration="1m11.692837635s" 
podCreationTimestamp="2026-01-21 17:19:02 +0000 UTC" firstStartedPulling="2026-01-21 17:19:04.02725885 +0000 UTC m=+144.953389710" lastFinishedPulling="2026-01-21 17:20:13.05246305 +0000 UTC m=+213.978593910" observedRunningTime="2026-01-21 17:20:13.672520426 +0000 UTC m=+214.598651296" watchObservedRunningTime="2026-01-21 17:20:13.692837635 +0000 UTC m=+214.618968495" Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.713075 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9pg9m" podStartSLOduration=2.715036965 podStartE2EDuration="1m11.713053992s" podCreationTimestamp="2026-01-21 17:19:02 +0000 UTC" firstStartedPulling="2026-01-21 17:19:04.020967241 +0000 UTC m=+144.947098101" lastFinishedPulling="2026-01-21 17:20:13.018984268 +0000 UTC m=+213.945115128" observedRunningTime="2026-01-21 17:20:13.709247468 +0000 UTC m=+214.635378338" watchObservedRunningTime="2026-01-21 17:20:13.713053992 +0000 UTC m=+214.639184862" Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.713832 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zlnv2" podStartSLOduration=2.6590201049999997 podStartE2EDuration="1m11.713827011s" podCreationTimestamp="2026-01-21 17:19:02 +0000 UTC" firstStartedPulling="2026-01-21 17:19:04.01699117 +0000 UTC m=+144.943122030" lastFinishedPulling="2026-01-21 17:20:13.071798076 +0000 UTC m=+213.997928936" observedRunningTime="2026-01-21 17:20:13.693892551 +0000 UTC m=+214.620023411" watchObservedRunningTime="2026-01-21 17:20:13.713827011 +0000 UTC m=+214.639957871" Jan 21 17:20:13 crc kubenswrapper[4823]: I0121 17:20:13.729560 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tkfhr" podStartSLOduration=2.7582879829999998 podStartE2EDuration="1m8.729545427s" podCreationTimestamp="2026-01-21 17:19:05 +0000 UTC" firstStartedPulling="2026-01-21 17:19:07.131215585 +0000 UTC m=+148.057346445" lastFinishedPulling="2026-01-21 17:20:13.102473029 +0000 UTC m=+214.028603889" observedRunningTime="2026-01-21 17:20:13.727049806 +0000 UTC m=+214.653180676" watchObservedRunningTime="2026-01-21 17:20:13.729545427 +0000 UTC m=+214.655676287" Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.075752 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.075808 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.075864 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.076420 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04"} 
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.076464 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04" gracePeriod=600 Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.663458 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04" exitCode=0 Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.663546 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04"} Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.663817 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"5b8c1a02b532d1e17f35eddd81a5df62e89e85b8c0ba2bf0662dc871e80c0ac4"} Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.744046 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:20:15 crc kubenswrapper[4823]: I0121 17:20:15.744111 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:20:16 crc kubenswrapper[4823]: I0121 17:20:16.782228 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tkfhr" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerName="registry-server" probeResult="failure" output=< Jan 21 17:20:16 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 21 17:20:16 crc kubenswrapper[4823]: > Jan 21 17:20:19 crc kubenswrapper[4823]: I0121 17:20:19.722981 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" podUID="b61875bd-8183-49ad-a085-7478b5b97ca8" containerName="oauth-openshift" containerID="cri-o://06c0f3c72b2f2e10e9b7be0c65a32a2fa965ac64c274a624802806917386ec93" gracePeriod=15 Jan 21 17:20:21 crc kubenswrapper[4823]: I0121 17:20:21.695368 4823 generic.go:334] "Generic (PLEG): container finished" podID="b61875bd-8183-49ad-a085-7478b5b97ca8" containerID="06c0f3c72b2f2e10e9b7be0c65a32a2fa965ac64c274a624802806917386ec93" exitCode=0 Jan 21 17:20:21 crc kubenswrapper[4823]: I0121 17:20:21.695472 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" event={"ID":"b61875bd-8183-49ad-a085-7478b5b97ca8","Type":"ContainerDied","Data":"06c0f3c72b2f2e10e9b7be0c65a32a2fa965ac64c274a624802806917386ec93"} Jan 21 17:20:21 crc kubenswrapper[4823]: I0121 17:20:21.997566 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.035837 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55c7db9594-xnxvf"] Jan 21 17:20:22 crc kubenswrapper[4823]: E0121 17:20:22.036156 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133279e8-0382-4c22-aee4-423701729b21" containerName="extract-utilities" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036175 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="133279e8-0382-4c22-aee4-423701729b21" containerName="extract-utilities" Jan 21 17:20:22 crc kubenswrapper[4823]: E0121 17:20:22.036197 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133279e8-0382-4c22-aee4-423701729b21" containerName="registry-server" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036208 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="133279e8-0382-4c22-aee4-423701729b21" containerName="registry-server" Jan 21 17:20:22 crc kubenswrapper[4823]: E0121 17:20:22.036224 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133279e8-0382-4c22-aee4-423701729b21" containerName="extract-content" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036235 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="133279e8-0382-4c22-aee4-423701729b21" containerName="extract-content" Jan 21 17:20:22 crc kubenswrapper[4823]: E0121 17:20:22.036253 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerName="registry-server" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036263 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerName="registry-server" Jan 21 17:20:22 crc kubenswrapper[4823]: E0121 17:20:22.036278 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61875bd-8183-49ad-a085-7478b5b97ca8" containerName="oauth-openshift" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036288 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61875bd-8183-49ad-a085-7478b5b97ca8" containerName="oauth-openshift" Jan 21 17:20:22 crc kubenswrapper[4823]: E0121 17:20:22.036302 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerName="extract-utilities" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036312 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerName="extract-utilities" Jan 21 17:20:22 crc kubenswrapper[4823]: E0121 17:20:22.036338 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerName="extract-content" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036348 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerName="extract-content" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036497 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="133279e8-0382-4c22-aee4-423701729b21" containerName="registry-server" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036519 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="25390b7d-a426-47af-b0c5-dfa4c6a64667" containerName="registry-server" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.036540 4823 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b61875bd-8183-49ad-a085-7478b5b97ca8" containerName="oauth-openshift" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.037081 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.044277 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.050709 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-cliconfig\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.050822 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-idp-0-file-data\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.050889 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-router-certs\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.050931 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-dir\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.050966 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-service-ca\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.050998 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-trusted-ca-bundle\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051033 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-session\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051059 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-error\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051108 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-provider-selection\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051138 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-policies\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051163 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbvng\" (UniqueName: \"kubernetes.io/projected/b61875bd-8183-49ad-a085-7478b5b97ca8-kube-api-access-vbvng\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051195 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-serving-cert\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051236 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-login\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051260 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-ocp-branding-template\") pod \"b61875bd-8183-49ad-a085-7478b5b97ca8\" (UID: \"b61875bd-8183-49ad-a085-7478b5b97ca8\") " Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051398 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-audit-dir\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051441 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051466 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051498 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051555 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-audit-policies\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051616 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzks4\" (UniqueName: \"kubernetes.io/projected/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-kube-api-access-wzks4\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051642 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-session\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051669 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051717 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051751 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " 
pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051790 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051817 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051889 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051922 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.051973 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.053274 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c7db9594-xnxvf"] Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.055803 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.056210 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.056509 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.059659 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.059990 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.060282 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.061874 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.065455 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.070029 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.068122 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61875bd-8183-49ad-a085-7478b5b97ca8-kube-api-access-vbvng" (OuterVolumeSpecName: "kube-api-access-vbvng") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "kube-api-access-vbvng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.072156 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.073286 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.085337 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b61875bd-8183-49ad-a085-7478b5b97ca8" (UID: "b61875bd-8183-49ad-a085-7478b5b97ca8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152593 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152651 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152692 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152721 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152752 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-audit-dir\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152780 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152804 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152828 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " 
pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152909 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-audit-policies\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152910 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-audit-dir\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152941 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzks4\" (UniqueName: \"kubernetes.io/projected/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-kube-api-access-wzks4\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.152967 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-session\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153016 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153084 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153123 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153200 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153220 4823 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153239 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153276 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153297 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153317 4823 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153335 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153350 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153363 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153376 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153390 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b61875bd-8183-49ad-a085-7478b5b97ca8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153404 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b61875bd-8183-49ad-a085-7478b5b97ca8-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153417 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbvng\" (UniqueName: \"kubernetes.io/projected/b61875bd-8183-49ad-a085-7478b5b97ca8-kube-api-access-vbvng\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153670 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.153700 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.154132 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.154674 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-audit-policies\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.156562 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.157270 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.157452 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.157452 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.157991 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-session\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.158742 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.159418 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.159845 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.181477 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzks4\" (UniqueName: \"kubernetes.io/projected/9c218ef0-9e53-4ba4-9354-de2cc9f914ea-kube-api-access-wzks4\") pod \"oauth-openshift-55c7db9594-xnxvf\" (UID: \"9c218ef0-9e53-4ba4-9354-de2cc9f914ea\") " pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.402734 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.557261 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.557613 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.627288 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.702924 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.702944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b8nkd" event={"ID":"b61875bd-8183-49ad-a085-7478b5b97ca8","Type":"ContainerDied","Data":"65587d7dd1c11e78502a42457e924c45571e53127404641ae04c131e1bd9c3ca"} Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.702983 4823 scope.go:117] "RemoveContainer" containerID="06c0f3c72b2f2e10e9b7be0c65a32a2fa965ac64c274a624802806917386ec93" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.731943 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8nkd"] Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.735163 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8nkd"] Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.743037 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:20:22 crc kubenswrapper[4823]: I0121 17:20:22.842789 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c7db9594-xnxvf"] Jan 21 17:20:22 crc kubenswrapper[4823]: W0121 17:20:22.849455 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c218ef0_9e53_4ba4_9354_de2cc9f914ea.slice/crio-63326a1b9cc2f28f1d0427f9f3e197b1a0de438b6919401f7255d03e7f9da832 WatchSource:0}: Error finding container 63326a1b9cc2f28f1d0427f9f3e197b1a0de438b6919401f7255d03e7f9da832: Status 404 returned error can't find the container with id 63326a1b9cc2f28f1d0427f9f3e197b1a0de438b6919401f7255d03e7f9da832 Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.044354 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.044412 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.083476 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.180040 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.180103 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.219648 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.351391 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b61875bd-8183-49ad-a085-7478b5b97ca8" path="/var/lib/kubelet/pods/b61875bd-8183-49ad-a085-7478b5b97ca8/volumes" Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.710588 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" 
event={"ID":"9c218ef0-9e53-4ba4-9354-de2cc9f914ea","Type":"ContainerStarted","Data":"704c2a0e4e7e70b24fbb9bd75cbda750e0721aeb7f3eb46359711aadf22ba7b6"} Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.711807 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" event={"ID":"9c218ef0-9e53-4ba4-9354-de2cc9f914ea","Type":"ContainerStarted","Data":"63326a1b9cc2f28f1d0427f9f3e197b1a0de438b6919401f7255d03e7f9da832"} Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.764979 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.774303 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:20:23 crc kubenswrapper[4823]: I0121 17:20:23.785099 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" podStartSLOduration=29.785073312 podStartE2EDuration="29.785073312s" podCreationTimestamp="2026-01-21 17:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:20:23.741220484 +0000 UTC m=+224.667351344" watchObservedRunningTime="2026-01-21 17:20:23.785073312 +0000 UTC m=+224.711204192" Jan 21 17:20:24 crc kubenswrapper[4823]: I0121 17:20:24.715984 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:24 crc kubenswrapper[4823]: I0121 17:20:24.720299 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55c7db9594-xnxvf" Jan 21 17:20:25 crc kubenswrapper[4823]: I0121 17:20:25.790429 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:20:25 crc kubenswrapper[4823]: I0121 17:20:25.835754 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:20:26 crc kubenswrapper[4823]: I0121 17:20:26.383215 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlnv2"] Jan 21 17:20:26 crc kubenswrapper[4823]: I0121 17:20:26.383651 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zlnv2" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" containerName="registry-server" containerID="cri-o://06f369498d2375a20d630a5f07a26e47a8ddded228394a8dd69590cc7f430a24" gracePeriod=2 Jan 21 17:20:26 crc kubenswrapper[4823]: I0121 17:20:26.579109 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcnnz"] Jan 21 17:20:26 crc kubenswrapper[4823]: I0121 17:20:26.579444 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kcnnz" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerName="registry-server" containerID="cri-o://25d2e2258607137e51dc3aef01e63476387a37723a7ab3af0e2e736b2de64cd7" gracePeriod=2 Jan 21 17:20:27 crc kubenswrapper[4823]: I0121 17:20:27.737593 4823 generic.go:334] "Generic (PLEG): container finished" podID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" 
containerID="25d2e2258607137e51dc3aef01e63476387a37723a7ab3af0e2e736b2de64cd7" exitCode=0 Jan 21 17:20:27 crc kubenswrapper[4823]: I0121 17:20:27.737666 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnnz" event={"ID":"fef83eca-df2a-4e24-80fe-b8beb1b192c6","Type":"ContainerDied","Data":"25d2e2258607137e51dc3aef01e63476387a37723a7ab3af0e2e736b2de64cd7"} Jan 21 17:20:27 crc kubenswrapper[4823]: I0121 17:20:27.740365 4823 generic.go:334] "Generic (PLEG): container finished" podID="989e8ecd-3950-4494-b9af-911eeeed065c" containerID="06f369498d2375a20d630a5f07a26e47a8ddded228394a8dd69590cc7f430a24" exitCode=0 Jan 21 17:20:27 crc kubenswrapper[4823]: I0121 17:20:27.740411 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnv2" event={"ID":"989e8ecd-3950-4494-b9af-911eeeed065c","Type":"ContainerDied","Data":"06f369498d2375a20d630a5f07a26e47a8ddded228394a8dd69590cc7f430a24"} Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.046438 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.126333 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-utilities\") pod \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.126434 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-catalog-content\") pod \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.126553 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbnxn\" (UniqueName: \"kubernetes.io/projected/fef83eca-df2a-4e24-80fe-b8beb1b192c6-kube-api-access-jbnxn\") pod \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\" (UID: \"fef83eca-df2a-4e24-80fe-b8beb1b192c6\") " Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.127547 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-utilities" (OuterVolumeSpecName: "utilities") pod "fef83eca-df2a-4e24-80fe-b8beb1b192c6" (UID: "fef83eca-df2a-4e24-80fe-b8beb1b192c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.142905 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef83eca-df2a-4e24-80fe-b8beb1b192c6-kube-api-access-jbnxn" (OuterVolumeSpecName: "kube-api-access-jbnxn") pod "fef83eca-df2a-4e24-80fe-b8beb1b192c6" (UID: "fef83eca-df2a-4e24-80fe-b8beb1b192c6"). InnerVolumeSpecName "kube-api-access-jbnxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.174748 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fef83eca-df2a-4e24-80fe-b8beb1b192c6" (UID: "fef83eca-df2a-4e24-80fe-b8beb1b192c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.228378 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbnxn\" (UniqueName: \"kubernetes.io/projected/fef83eca-df2a-4e24-80fe-b8beb1b192c6-kube-api-access-jbnxn\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.228411 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.228420 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef83eca-df2a-4e24-80fe-b8beb1b192c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.548379 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.633131 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcsbn\" (UniqueName: \"kubernetes.io/projected/989e8ecd-3950-4494-b9af-911eeeed065c-kube-api-access-mcsbn\") pod \"989e8ecd-3950-4494-b9af-911eeeed065c\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.633484 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-utilities\") pod \"989e8ecd-3950-4494-b9af-911eeeed065c\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.633508 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-catalog-content\") pod \"989e8ecd-3950-4494-b9af-911eeeed065c\" (UID: \"989e8ecd-3950-4494-b9af-911eeeed065c\") " Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.634554 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-utilities" (OuterVolumeSpecName: "utilities") pod "989e8ecd-3950-4494-b9af-911eeeed065c" (UID: "989e8ecd-3950-4494-b9af-911eeeed065c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.638477 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989e8ecd-3950-4494-b9af-911eeeed065c-kube-api-access-mcsbn" (OuterVolumeSpecName: "kube-api-access-mcsbn") pod "989e8ecd-3950-4494-b9af-911eeeed065c" (UID: "989e8ecd-3950-4494-b9af-911eeeed065c"). InnerVolumeSpecName "kube-api-access-mcsbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.712540 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "989e8ecd-3950-4494-b9af-911eeeed065c" (UID: "989e8ecd-3950-4494-b9af-911eeeed065c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.735136 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.735189 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ecd-3950-4494-b9af-911eeeed065c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.735207 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcsbn\" (UniqueName: \"kubernetes.io/projected/989e8ecd-3950-4494-b9af-911eeeed065c-kube-api-access-mcsbn\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.748677 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnv2" event={"ID":"989e8ecd-3950-4494-b9af-911eeeed065c","Type":"ContainerDied","Data":"ec417db2d87bf3505e5a60c44fd643f94720a4e1ab5fa864d63de9fad2fe7386"} Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.748741 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlnv2" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.748784 4823 scope.go:117] "RemoveContainer" containerID="06f369498d2375a20d630a5f07a26e47a8ddded228394a8dd69590cc7f430a24" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.752281 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnnz" event={"ID":"fef83eca-df2a-4e24-80fe-b8beb1b192c6","Type":"ContainerDied","Data":"144823a512a7a1f9ee148da6964d35434148b3bcfebae3fb7f1fccac32ff9175"} Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.752403 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kcnnz" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.774080 4823 scope.go:117] "RemoveContainer" containerID="82ff48353fa497ac88f83cf65d87c98d107cc002c63b1f9d25b379c543c85d79" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.788149 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlnv2"] Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.796584 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zlnv2"] Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.804956 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcnnz"] Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.809025 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kcnnz"] Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.817185 4823 scope.go:117] "RemoveContainer" containerID="509b02b98a0435a4d9901cb18edd3472350a3d2055fab56c1e8c006ecc296a10" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.834807 4823 scope.go:117] "RemoveContainer" containerID="25d2e2258607137e51dc3aef01e63476387a37723a7ab3af0e2e736b2de64cd7" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.854225 4823 scope.go:117] "RemoveContainer" containerID="adc417f66b8494aa779669053c93301779cc3fff180a1ade0565209d9b059892" Jan 21 17:20:28 crc kubenswrapper[4823]: I0121 17:20:28.874896 4823 scope.go:117] "RemoveContainer" containerID="d6f013544d7b425a6fb5b86371be81ec25b6ecb9dd921bb82f090565c14055b2" Jan 21 17:20:29 crc kubenswrapper[4823]: I0121 17:20:29.350221 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" path="/var/lib/kubelet/pods/989e8ecd-3950-4494-b9af-911eeeed065c/volumes" Jan 21 17:20:29 crc kubenswrapper[4823]: I0121 17:20:29.351162 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" path="/var/lib/kubelet/pods/fef83eca-df2a-4e24-80fe-b8beb1b192c6/volumes" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.660058 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.660690 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerName="extract-utilities" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.660709 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerName="extract-utilities" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.660727 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerName="extract-content" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.660736 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerName="extract-content" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.660751 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerName="registry-server" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.660760 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerName="registry-server" Jan 21 
17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.660772 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" containerName="extract-utilities" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.660781 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" containerName="extract-utilities" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.660799 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" containerName="registry-server" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.660807 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" containerName="registry-server" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.660819 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" containerName="extract-content" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.660827 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" containerName="extract-content" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.660984 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="989e8ecd-3950-4494-b9af-911eeeed065c" containerName="registry-server" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.660998 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef83eca-df2a-4e24-80fe-b8beb1b192c6" containerName="registry-server" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.661473 4823 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.661728 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.661883 4823 util.go:30] "No sandbox for pod can be found. 
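The RemoveStaleState / "Deleted CPUSet assignment" burst is the CPU and memory managers pruning checkpoints for containers that no longer exist, ahead of admitting the new static pod. The CPU manager's checkpoint is a small JSON file on the node; the struct below is a simplified reading of it, and its field set should be treated as an assumption rather than the authoritative schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // cpuManagerState models the checkpoint the CPU manager keeps under
    // /var/lib/kubelet/cpu_manager_state; the "Deleted CPUSet assignment"
    // entries above are this state being pruned of stale containers.
    type cpuManagerState struct {
        PolicyName    string                       `json:"policyName"`
        DefaultCPUSet string                       `json:"defaultCpuSet"`
        Entries       map[string]map[string]string `json:"entries,omitempty"`
    }

    func main() {
        raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
        if err != nil {
            panic(err)
        }
        var st cpuManagerState
        if err := json.Unmarshal(raw, &st); err != nil {
            panic(err)
        }
        fmt.Printf("policy=%s default=%q pods-with-pinned-cpus=%d\n",
            st.PolicyName, st.DefaultCPUSet, len(st.Entries))
    }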
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.661839 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a" gracePeriod=15 Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.661952 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85" gracePeriod=15 Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.661991 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2" gracePeriod=15 Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.662007 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a" gracePeriod=15 Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.662040 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596" gracePeriod=15 Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.662715 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.662886 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.663001 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.663120 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.663764 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.663870 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.663995 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.664103 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="setup" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.664232 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.664330 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.664596 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.664726 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.665050 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.665166 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.665273 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.665356 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.665426 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.665507 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.665699 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.665782 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.669388 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 17:20:31 crc kubenswrapper[4823]: E0121 17:20:31.698965 4823 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.772743 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.773114 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.773289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.773438 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.773607 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.773811 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.774020 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.774200 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875543 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875666 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875703 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875732 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875763 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875826 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875873 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875722 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875909 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875958 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875956 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875958 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.875988 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.876017 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:31 crc kubenswrapper[4823]: I0121 17:20:31.876050 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.000249 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:32 crc kubenswrapper[4823]: W0121 17:20:32.022737 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-6167f120aa92fd8a24ed3b29b93b49a198b4267de68ffa8f03a7fc6f1ddbb926 WatchSource:0}: Error finding container 6167f120aa92fd8a24ed3b29b93b49a198b4267de68ffa8f03a7fc6f1ddbb926: Status 404 returned error can't find the container with id 6167f120aa92fd8a24ed3b29b93b49a198b4267de68ffa8f03a7fc6f1ddbb926 Jan 21 17:20:32 crc kubenswrapper[4823]: E0121 17:20:32.026405 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cceb55edfc42c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 17:20:32.025887788 +0000 UTC m=+232.952018648,LastTimestamp:2026-01-21 17:20:32.025887788 +0000 UTC m=+232.952018648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.780952 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" containerID="af943b61af43c3bec79b7dacbb08be7775c394c53e084de2a5782756125ca25d" exitCode=0 Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.781020 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e34b3d5-6673-42ed-851b-ec7977fe71fc","Type":"ContainerDied","Data":"af943b61af43c3bec79b7dacbb08be7775c394c53e084de2a5782756125ca25d"} Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.781684 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.783641 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.785200 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.787179 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2" exitCode=0 Jan 21 17:20:32 crc 
kubenswrapper[4823]: I0121 17:20:32.787205 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85" exitCode=0 Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.787216 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a" exitCode=0 Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.787224 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596" exitCode=2 Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.787284 4823 scope.go:117] "RemoveContainer" containerID="5560aad0ca504347a48b048c17c8f0db030b1e4ba37473debf73c8ade3838f76" Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.789461 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919"} Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.789502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6167f120aa92fd8a24ed3b29b93b49a198b4267de68ffa8f03a7fc6f1ddbb926"} Jan 21 17:20:32 crc kubenswrapper[4823]: E0121 17:20:32.790447 4823 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:20:32 crc kubenswrapper[4823]: I0121 17:20:32.790549 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:33 crc kubenswrapper[4823]: E0121 17:20:33.393287 4823 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" volumeName="registry-storage" Jan 21 17:20:33 crc kubenswrapper[4823]: I0121 17:20:33.798534 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 17:20:33 crc kubenswrapper[4823]: I0121 17:20:33.979286 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 17:20:33 crc kubenswrapper[4823]: I0121 17:20:33.980229 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:33 crc kubenswrapper[4823]: I0121 17:20:33.980836 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:33 crc kubenswrapper[4823]: I0121 17:20:33.981274 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.080671 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.081554 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.082174 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.101610 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.101674 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.101740 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.101762 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.101799 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.101883 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.102166 4823 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.102195 4823 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.102214 4823 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.203205 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kube-api-access\") pod \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.203266 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kubelet-dir\") pod \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.203341 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-var-lock\") pod \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\" (UID: \"0e34b3d5-6673-42ed-851b-ec7977fe71fc\") " Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.203454 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0e34b3d5-6673-42ed-851b-ec7977fe71fc" (UID: "0e34b3d5-6673-42ed-851b-ec7977fe71fc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.203542 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-var-lock" (OuterVolumeSpecName: "var-lock") pod "0e34b3d5-6673-42ed-851b-ec7977fe71fc" (UID: "0e34b3d5-6673-42ed-851b-ec7977fe71fc"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.203828 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.203868 4823 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e34b3d5-6673-42ed-851b-ec7977fe71fc-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.208241 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0e34b3d5-6673-42ed-851b-ec7977fe71fc" (UID: "0e34b3d5-6673-42ed-851b-ec7977fe71fc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.238298 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.238786 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.239057 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.239251 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.239445 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.239471 4823 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.239708 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.305091 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e34b3d5-6673-42ed-851b-ec7977fe71fc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.441494 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.38:6443: connect: connection refused" interval="400ms" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.812550 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.813406 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a" exitCode=0 Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.813460 4823 scope.go:117] "RemoveContainer" containerID="820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.813592 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.826075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e34b3d5-6673-42ed-851b-ec7977fe71fc","Type":"ContainerDied","Data":"c7812de7703641be7e750f60fdaa58b0bb7e9d6b406b19656eea08175b26568c"} Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.826116 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7812de7703641be7e750f60fdaa58b0bb7e9d6b406b19656eea08175b26568c" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.826175 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.828097 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.828745 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.838688 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.839135 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.840104 4823 scope.go:117] "RemoveContainer" containerID="37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.841830 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="800ms" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.856565 4823 scope.go:117] "RemoveContainer" containerID="87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.873785 4823 scope.go:117] "RemoveContainer" containerID="8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.890099 4823 scope.go:117] "RemoveContainer" containerID="dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.908161 4823 scope.go:117] "RemoveContainer" containerID="94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.932780 4823 scope.go:117] "RemoveContainer" containerID="820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.933384 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\": container with ID starting with 820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2 not found: ID does not exist" containerID="820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.933436 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2"} err="failed to get container status \"820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\": rpc error: code = NotFound desc = could not find container \"820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2\": container with ID starting with 820b9b6f57e5b78a310390f89fc4a2de849af8d07324fcc7177afbdcf346a1a2 not found: ID does not exist" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.933473 4823 scope.go:117] "RemoveContainer" containerID="37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.934262 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\": container with ID starting with 37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85 not found: ID does not exist" containerID="37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.934294 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85"} err="failed to get container status \"37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\": rpc error: code = NotFound desc = could not find container \"37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85\": container with ID starting with 37b8f8c88660cca81d33619084e858c779295c90a0e230ee21d9694fbba45d85 not found: ID does not exist" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.934312 4823 scope.go:117] "RemoveContainer" 
containerID="87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.934725 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\": container with ID starting with 87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a not found: ID does not exist" containerID="87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.934784 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a"} err="failed to get container status \"87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\": rpc error: code = NotFound desc = could not find container \"87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a\": container with ID starting with 87e58488143a5bb1f3fe8f7fd7a9a2f2b3e11611699ad40e4ecd81f38fcb541a not found: ID does not exist" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.934818 4823 scope.go:117] "RemoveContainer" containerID="8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.935354 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\": container with ID starting with 8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596 not found: ID does not exist" containerID="8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.935396 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596"} err="failed to get container status \"8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\": rpc error: code = NotFound desc = could not find container \"8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596\": container with ID starting with 8ff6657e0d6a1bef62e03d1df1b9fb9b63338111df33de4b72479ceb67be9596 not found: ID does not exist" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.935425 4823 scope.go:117] "RemoveContainer" containerID="dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.935887 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\": container with ID starting with dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a not found: ID does not exist" containerID="dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.935917 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a"} err="failed to get container status \"dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\": rpc error: code = NotFound desc = could not find container \"dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a\": container with ID starting with 
dfe5db714f6c62647b4d9d762c9aedb91d54eecc0134e27f747111f977c85a7a not found: ID does not exist" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.935936 4823 scope.go:117] "RemoveContainer" containerID="94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0" Jan 21 17:20:34 crc kubenswrapper[4823]: E0121 17:20:34.936201 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\": container with ID starting with 94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0 not found: ID does not exist" containerID="94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0" Jan 21 17:20:34 crc kubenswrapper[4823]: I0121 17:20:34.936223 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0"} err="failed to get container status \"94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\": rpc error: code = NotFound desc = could not find container \"94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0\": container with ID starting with 94e1cadba081bd75cfcfca883e5e429014dd7dc44552a4543c742f183e82f1d0 not found: ID does not exist" Jan 21 17:20:35 crc kubenswrapper[4823]: I0121 17:20:35.350039 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 17:20:35 crc kubenswrapper[4823]: E0121 17:20:35.643057 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="1.6s" Jan 21 17:20:35 crc kubenswrapper[4823]: E0121 17:20:35.784177 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cceb55edfc42c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 17:20:32.025887788 +0000 UTC m=+232.952018648,LastTimestamp:2026-01-21 17:20:32.025887788 +0000 UTC m=+232.952018648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 17:20:37 crc kubenswrapper[4823]: E0121 17:20:37.245106 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="3.2s" Jan 21 17:20:39 crc kubenswrapper[4823]: I0121 17:20:39.348454 4823 status_manager.go:851] "Failed to get status 
for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:40 crc kubenswrapper[4823]: E0121 17:20:40.445935 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="6.4s" Jan 21 17:20:45 crc kubenswrapper[4823]: I0121 17:20:45.456146 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 17:20:45 crc kubenswrapper[4823]: I0121 17:20:45.456919 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 17:20:45 crc kubenswrapper[4823]: E0121 17:20:45.785849 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cceb55edfc42c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 17:20:32.025887788 +0000 UTC m=+232.952018648,LastTimestamp:2026-01-21 17:20:32.025887788 +0000 UTC m=+232.952018648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 17:20:45 crc kubenswrapper[4823]: I0121 17:20:45.884606 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 17:20:45 crc kubenswrapper[4823]: I0121 17:20:45.884689 4823 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115" exitCode=1 Jan 21 17:20:45 crc kubenswrapper[4823]: I0121 17:20:45.884750 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115"} Jan 21 17:20:45 crc kubenswrapper[4823]: I0121 17:20:45.885771 4823 scope.go:117] "RemoveContainer" 
containerID="c1e769c94fb95d2571c6fe5ac6b051e29d319546ef33b3600bb6fb7f1494d115" Jan 21 17:20:45 crc kubenswrapper[4823]: I0121 17:20:45.886142 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:45 crc kubenswrapper[4823]: I0121 17:20:45.886822 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:46 crc kubenswrapper[4823]: E0121 17:20:46.847970 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="7s" Jan 21 17:20:46 crc kubenswrapper[4823]: I0121 17:20:46.897400 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 17:20:46 crc kubenswrapper[4823]: I0121 17:20:46.897466 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ad8e93d1f757cbb2ffab63c679c5ff2aa7bcd940e3562a8efbebe3be10abfa28"} Jan 21 17:20:46 crc kubenswrapper[4823]: I0121 17:20:46.898565 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:46 crc kubenswrapper[4823]: I0121 17:20:46.899154 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.343539 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.344458 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.345291 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.357355 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.357416 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:47 crc kubenswrapper[4823]: E0121 17:20:47.357999 4823 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.358546 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:47 crc kubenswrapper[4823]: W0121 17:20:47.379805 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-3e5f67515cd377f248952cce092e4151b11505dbe296fb2ff7e577282bda6f8b WatchSource:0}: Error finding container 3e5f67515cd377f248952cce092e4151b11505dbe296fb2ff7e577282bda6f8b: Status 404 returned error can't find the container with id 3e5f67515cd377f248952cce092e4151b11505dbe296fb2ff7e577282bda6f8b Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.904542 4823 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="911598fdacc66551ba16ce85f3d9e8b8db943c8fd261f739c0b1ed85619b5d2a" exitCode=0 Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.904610 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"911598fdacc66551ba16ce85f3d9e8b8db943c8fd261f739c0b1ed85619b5d2a"} Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.904655 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e5f67515cd377f248952cce092e4151b11505dbe296fb2ff7e577282bda6f8b"} Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.905101 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.905123 4823 mirror_client.go:130] "Deleting a 
mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.905649 4823 status_manager.go:851] "Failed to get status for pod" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:47 crc kubenswrapper[4823]: E0121 17:20:47.905793 4823 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:47 crc kubenswrapper[4823]: I0121 17:20:47.906132 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 17:20:48 crc kubenswrapper[4823]: I0121 17:20:48.913168 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7a779d46c6f55d614510fce22c1fd24282ccb7d943ec15f03adcda42da11507c"} Jan 21 17:20:48 crc kubenswrapper[4823]: I0121 17:20:48.913380 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2ea4bc3e014272fba70f3795607a96daaf9bf128cd14f565aea22aaa2191ac1d"} Jan 21 17:20:48 crc kubenswrapper[4823]: I0121 17:20:48.913402 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a73bd3665aae279a4946bd848b11d31c0c3cdf8d1c8dba89fb527e5188d0eded"} Jan 21 17:20:48 crc kubenswrapper[4823]: I0121 17:20:48.913410 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5354f9c581d804ad969575bff59b32c75ed2c6ed86f537b0950ddf4b02795afe"} Jan 21 17:20:49 crc kubenswrapper[4823]: I0121 17:20:49.921531 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6916a8697d5e26353ff90c37718effb32c274734b4892d25a3b5cce80fcf8f34"} Jan 21 17:20:49 crc kubenswrapper[4823]: I0121 17:20:49.922391 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:49 crc kubenswrapper[4823]: I0121 17:20:49.922427 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:49 crc kubenswrapper[4823]: I0121 17:20:49.923026 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:51 crc kubenswrapper[4823]: I0121 17:20:51.634992 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:20:52 crc kubenswrapper[4823]: I0121 17:20:52.359703 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:52 crc kubenswrapper[4823]: I0121 17:20:52.359747 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:52 crc kubenswrapper[4823]: I0121 17:20:52.363832 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:53 crc kubenswrapper[4823]: I0121 17:20:53.358576 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:20:53 crc kubenswrapper[4823]: I0121 17:20:53.360198 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 17:20:53 crc kubenswrapper[4823]: I0121 17:20:53.360355 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 17:20:54 crc kubenswrapper[4823]: I0121 17:20:54.932555 4823 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:55 crc kubenswrapper[4823]: I0121 17:20:55.953668 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:55 crc kubenswrapper[4823]: I0121 17:20:55.953700 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:55 crc kubenswrapper[4823]: I0121 17:20:55.958668 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:20:55 crc kubenswrapper[4823]: I0121 17:20:55.961768 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebd6719d-78b5-49a5-9b3f-c0ce99a0de1e" Jan 21 17:20:56 crc kubenswrapper[4823]: I0121 17:20:56.959015 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:56 crc kubenswrapper[4823]: I0121 17:20:56.959060 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:20:59 crc kubenswrapper[4823]: I0121 17:20:59.361526 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebd6719d-78b5-49a5-9b3f-c0ce99a0de1e" Jan 21 17:21:03 crc kubenswrapper[4823]: I0121 17:21:03.359143 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 17:21:03 crc kubenswrapper[4823]: I0121 17:21:03.359756 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 17:21:04 crc kubenswrapper[4823]: I0121 17:21:04.585967 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 17:21:04 crc kubenswrapper[4823]: I0121 17:21:04.871969 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 17:21:04 crc kubenswrapper[4823]: I0121 17:21:04.951926 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 17:21:04 crc kubenswrapper[4823]: I0121 17:21:04.991379 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 17:21:05 crc kubenswrapper[4823]: I0121 17:21:05.196054 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 17:21:05 crc kubenswrapper[4823]: I0121 17:21:05.689198 4823 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 17:21:05 crc kubenswrapper[4823]: I0121 17:21:05.712383 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 17:21:05 crc kubenswrapper[4823]: I0121 17:21:05.832785 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 17:21:05 crc kubenswrapper[4823]: I0121 17:21:05.834628 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 17:21:05 crc kubenswrapper[4823]: I0121 17:21:05.843493 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 17:21:05 crc kubenswrapper[4823]: I0121 17:21:05.964601 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.057179 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.057271 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.179112 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.344430 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.447008 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 
21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.509897 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.706728 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.802093 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.821625 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 17:21:06 crc kubenswrapper[4823]: I0121 17:21:06.994890 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.308551 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.418272 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.453814 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.486699 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.488440 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.539917 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.673647 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.692376 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.813716 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.866214 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.942659 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 17:21:07 crc kubenswrapper[4823]: I0121 17:21:07.956842 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.088280 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.151754 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 17:21:08 crc 
kubenswrapper[4823]: I0121 17:21:08.236478 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.291912 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.350542 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.390005 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.404883 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.464460 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.481801 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.586840 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.606290 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.689196 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.709217 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.723137 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.725451 4823 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.742229 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.742901 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.789091 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.834459 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.855607 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.909089 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.931714 4823 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 17:21:08 crc kubenswrapper[4823]: I0121 17:21:08.989775 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.017697 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.022183 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.076171 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.132063 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.136980 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.167710 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.254702 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.357796 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.417414 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.587746 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.619614 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.629700 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.673844 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.720596 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.735892 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.830489 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.835716 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.917278 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 17:21:09 crc kubenswrapper[4823]: I0121 17:21:09.948345 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.016620 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.093577 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.173329 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.314002 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.421030 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.439142 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.461423 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.490393 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.537704 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.570527 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.592511 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.604721 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.678460 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.694471 4823 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.721389 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.874974 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.944589 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.964143 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 17:21:10 crc kubenswrapper[4823]: I0121 17:21:10.969166 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.106161 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.255863 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.338356 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.372379 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.443198 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.471381 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.482254 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.741575 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.775168 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.804125 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.825491 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.835508 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.931559 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.936597 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 17:21:11 crc kubenswrapper[4823]: I0121 17:21:11.965763 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.044642 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.102398 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.133084 4823 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.177512 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.190938 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.222869 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.302038 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.392658 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.395920 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.438381 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.455018 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.524520 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.549322 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.604419 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.638936 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.685123 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.727882 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.798176 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 17:21:12 crc kubenswrapper[4823]: I0121 17:21:12.836070 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.018327 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.055183 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 
17:21:13.192231 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.210045 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.230244 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.230356 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.316497 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.352631 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.363724 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.369260 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.369779 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.417328 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.420462 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.452228 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.471277 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.494805 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.587997 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.613368 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.698759 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.729371 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.790575 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.820035 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.865609 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 17:21:13 crc kubenswrapper[4823]: I0121 17:21:13.958327 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.006506 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.058945 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.171010 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.314316 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.337602 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.392327 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.448783 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.511441 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.645520 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.668748 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.702255 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.720563 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.756580 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.757611 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.769494 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 
17:21:14.798307 4823 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.800302 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.807992 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.813613 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 17:21:14 crc kubenswrapper[4823]: I0121 17:21:14.998406 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.054984 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.169955 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.184539 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.289259 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.344077 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.384603 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.395945 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.443147 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.525621 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.575723 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.587788 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.588285 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.671246 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.713624 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 17:21:15 crc 
kubenswrapper[4823]: I0121 17:21:15.736041 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.847339 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.877811 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.944007 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.944196 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.949103 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 17:21:15 crc kubenswrapper[4823]: I0121 17:21:15.984877 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.071173 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.096260 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.099687 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.148318 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.211530 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.355578 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.488751 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.513592 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.565271 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.580605 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.720788 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.760600 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.954054 4823 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.971928 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.986139 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 17:21:16 crc kubenswrapper[4823]: I0121 17:21:16.986994 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.038763 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.093957 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.135738 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.162837 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.198765 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.200943 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.245059 4823 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.430153 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.511923 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.574633 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.647662 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.733657 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.859367 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.898833 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 17:21:17 crc kubenswrapper[4823]: I0121 17:21:17.958585 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 17:21:17 crc kubenswrapper[4823]: 
I0121 17:21:17.958934 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.014994 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.055479 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.067348 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.134325 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.181910 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.195659 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.345481 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.403389 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.467781 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.540005 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.643984 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 17:21:18 crc kubenswrapper[4823]: I0121 17:21:18.732090 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 17:21:19 crc kubenswrapper[4823]: I0121 17:21:19.004884 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 17:21:19 crc kubenswrapper[4823]: I0121 17:21:19.400516 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 17:21:19 crc kubenswrapper[4823]: I0121 17:21:19.515485 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 17:21:19 crc kubenswrapper[4823]: I0121 17:21:19.520137 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 17:21:19 crc kubenswrapper[4823]: I0121 17:21:19.585423 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 17:21:19 crc kubenswrapper[4823]: I0121 17:21:19.696522 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 17:21:20 
crc kubenswrapper[4823]: I0121 17:21:20.034735 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.053966 4823 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.059175 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.059248 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.059269 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pg9m","openshift-marketplace/redhat-operators-tkfhr","openshift-marketplace/redhat-marketplace-5gw98","openshift-marketplace/community-operators-pqnps","openshift-marketplace/marketplace-operator-79b997595-vw29d"] Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.059827 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.059896 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5153e76f-0977-41d2-a733-738cd41c36f0" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.060040 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5gw98" podUID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerName="registry-server" containerID="cri-o://4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204" gracePeriod=30 Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.060217 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9pg9m" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerName="registry-server" containerID="cri-o://8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe" gracePeriod=30 Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.060444 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tkfhr" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerName="registry-server" containerID="cri-o://559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e" gracePeriod=30 Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.060360 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pqnps" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerName="registry-server" containerID="cri-o://807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f" gracePeriod=30 Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.060530 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" podUID="2e548be0-d2ab-4bad-a18c-cbe203dbb314" containerName="marketplace-operator" containerID="cri-o://35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291" gracePeriod=30 Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.072121 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 
17:21:20.107173 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.107151509 podStartE2EDuration="26.107151509s" podCreationTimestamp="2026-01-21 17:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:21:20.089000617 +0000 UTC m=+281.015131507" watchObservedRunningTime="2026-01-21 17:21:20.107151509 +0000 UTC m=+281.033282389" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.317727 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.554618 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.558014 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.562678 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.567986 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.574889 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690049 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-trusted-ca\") pod \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690093 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbmvf\" (UniqueName: \"kubernetes.io/projected/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-kube-api-access-hbmvf\") pod \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690119 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbrfh\" (UniqueName: \"kubernetes.io/projected/035531d0-ecfd-4d31-be47-08fc49762b7e-kube-api-access-pbrfh\") pod \"035531d0-ecfd-4d31-be47-08fc49762b7e\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690149 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-catalog-content\") pod \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690178 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-catalog-content\") pod \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " Jan 21 17:21:20 crc 
kubenswrapper[4823]: I0121 17:21:20.690219 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psdjz\" (UniqueName: \"kubernetes.io/projected/b3361a52-0a28-4be5-8216-a324cbba0c60-kube-api-access-psdjz\") pod \"b3361a52-0a28-4be5-8216-a324cbba0c60\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690246 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-operator-metrics\") pod \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690272 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-utilities\") pod \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690287 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-utilities\") pod \"035531d0-ecfd-4d31-be47-08fc49762b7e\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690304 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-utilities\") pod \"b3361a52-0a28-4be5-8216-a324cbba0c60\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690334 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-utilities\") pod \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\" (UID: \"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690350 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wc44\" (UniqueName: \"kubernetes.io/projected/2e548be0-d2ab-4bad-a18c-cbe203dbb314-kube-api-access-9wc44\") pod \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\" (UID: \"2e548be0-d2ab-4bad-a18c-cbe203dbb314\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690371 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-catalog-content\") pod \"035531d0-ecfd-4d31-be47-08fc49762b7e\" (UID: \"035531d0-ecfd-4d31-be47-08fc49762b7e\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690399 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95pzg\" (UniqueName: \"kubernetes.io/projected/8edc9d3c-22fb-492b-8f1a-c488667e0df0-kube-api-access-95pzg\") pod \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\" (UID: \"8edc9d3c-22fb-492b-8f1a-c488667e0df0\") " Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.690416 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-catalog-content\") pod \"b3361a52-0a28-4be5-8216-a324cbba0c60\" (UID: \"b3361a52-0a28-4be5-8216-a324cbba0c60\") " Jan 
21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.691782 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-utilities" (OuterVolumeSpecName: "utilities") pod "035531d0-ecfd-4d31-be47-08fc49762b7e" (UID: "035531d0-ecfd-4d31-be47-08fc49762b7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.691902 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-utilities" (OuterVolumeSpecName: "utilities") pod "b3361a52-0a28-4be5-8216-a324cbba0c60" (UID: "b3361a52-0a28-4be5-8216-a324cbba0c60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.691950 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-utilities" (OuterVolumeSpecName: "utilities") pod "dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" (UID: "dfcdf50c-881a-40d0-aa3b-cac833d1d9e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.692180 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-utilities" (OuterVolumeSpecName: "utilities") pod "8edc9d3c-22fb-492b-8f1a-c488667e0df0" (UID: "8edc9d3c-22fb-492b-8f1a-c488667e0df0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.692473 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2e548be0-d2ab-4bad-a18c-cbe203dbb314" (UID: "2e548be0-d2ab-4bad-a18c-cbe203dbb314"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.696793 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3361a52-0a28-4be5-8216-a324cbba0c60-kube-api-access-psdjz" (OuterVolumeSpecName: "kube-api-access-psdjz") pod "b3361a52-0a28-4be5-8216-a324cbba0c60" (UID: "b3361a52-0a28-4be5-8216-a324cbba0c60"). InnerVolumeSpecName "kube-api-access-psdjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.696838 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035531d0-ecfd-4d31-be47-08fc49762b7e-kube-api-access-pbrfh" (OuterVolumeSpecName: "kube-api-access-pbrfh") pod "035531d0-ecfd-4d31-be47-08fc49762b7e" (UID: "035531d0-ecfd-4d31-be47-08fc49762b7e"). InnerVolumeSpecName "kube-api-access-pbrfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.696979 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8edc9d3c-22fb-492b-8f1a-c488667e0df0-kube-api-access-95pzg" (OuterVolumeSpecName: "kube-api-access-95pzg") pod "8edc9d3c-22fb-492b-8f1a-c488667e0df0" (UID: "8edc9d3c-22fb-492b-8f1a-c488667e0df0"). InnerVolumeSpecName "kube-api-access-95pzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.697160 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-kube-api-access-hbmvf" (OuterVolumeSpecName: "kube-api-access-hbmvf") pod "dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" (UID: "dfcdf50c-881a-40d0-aa3b-cac833d1d9e6"). InnerVolumeSpecName "kube-api-access-hbmvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.698011 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2e548be0-d2ab-4bad-a18c-cbe203dbb314" (UID: "2e548be0-d2ab-4bad-a18c-cbe203dbb314"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.698564 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e548be0-d2ab-4bad-a18c-cbe203dbb314-kube-api-access-9wc44" (OuterVolumeSpecName: "kube-api-access-9wc44") pod "2e548be0-d2ab-4bad-a18c-cbe203dbb314" (UID: "2e548be0-d2ab-4bad-a18c-cbe203dbb314"). InnerVolumeSpecName "kube-api-access-9wc44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.734051 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3361a52-0a28-4be5-8216-a324cbba0c60" (UID: "b3361a52-0a28-4be5-8216-a324cbba0c60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.749205 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "035531d0-ecfd-4d31-be47-08fc49762b7e" (UID: "035531d0-ecfd-4d31-be47-08fc49762b7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.758954 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.759606 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8edc9d3c-22fb-492b-8f1a-c488667e0df0" (UID: "8edc9d3c-22fb-492b-8f1a-c488667e0df0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791756 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95pzg\" (UniqueName: \"kubernetes.io/projected/8edc9d3c-22fb-492b-8f1a-c488667e0df0-kube-api-access-95pzg\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791788 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791800 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791809 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbmvf\" (UniqueName: \"kubernetes.io/projected/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-kube-api-access-hbmvf\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791817 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbrfh\" (UniqueName: \"kubernetes.io/projected/035531d0-ecfd-4d31-be47-08fc49762b7e-kube-api-access-pbrfh\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791825 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791833 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psdjz\" (UniqueName: \"kubernetes.io/projected/b3361a52-0a28-4be5-8216-a324cbba0c60-kube-api-access-psdjz\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791841 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e548be0-d2ab-4bad-a18c-cbe203dbb314-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791865 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8edc9d3c-22fb-492b-8f1a-c488667e0df0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791874 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791882 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3361a52-0a28-4be5-8216-a324cbba0c60-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791891 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791899 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wc44\" (UniqueName: 
\"kubernetes.io/projected/2e548be0-d2ab-4bad-a18c-cbe203dbb314-kube-api-access-9wc44\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.791907 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035531d0-ecfd-4d31-be47-08fc49762b7e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.820293 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" (UID: "dfcdf50c-881a-40d0-aa3b-cac833d1d9e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:20 crc kubenswrapper[4823]: I0121 17:21:20.893558 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.102084 4823 generic.go:334] "Generic (PLEG): container finished" podID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerID="559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e" exitCode=0 Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.102223 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkfhr" event={"ID":"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6","Type":"ContainerDied","Data":"559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.102307 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkfhr" event={"ID":"dfcdf50c-881a-40d0-aa3b-cac833d1d9e6","Type":"ContainerDied","Data":"897b2fcdeaf147bc169e3deaf1e8b03a381f2a9bca6edd1df0409233eecfa80b"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.102332 4823 scope.go:117] "RemoveContainer" containerID="559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.102360 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkfhr" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.106054 4823 generic.go:334] "Generic (PLEG): container finished" podID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerID="4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204" exitCode=0 Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.106143 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gw98" event={"ID":"b3361a52-0a28-4be5-8216-a324cbba0c60","Type":"ContainerDied","Data":"4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.106197 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gw98" event={"ID":"b3361a52-0a28-4be5-8216-a324cbba0c60","Type":"ContainerDied","Data":"daad94bc06fa9bd01e3ef6fd61b5a0b2a2bbfadb4ee5a4d7de3a53c29262d620"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.106298 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gw98" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.109622 4823 generic.go:334] "Generic (PLEG): container finished" podID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerID="807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f" exitCode=0 Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.109721 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqnps" event={"ID":"8edc9d3c-22fb-492b-8f1a-c488667e0df0","Type":"ContainerDied","Data":"807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.109788 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqnps" event={"ID":"8edc9d3c-22fb-492b-8f1a-c488667e0df0","Type":"ContainerDied","Data":"2e116cece061da101789abab2ef2c579ca32918017ee51583762f604d75014e7"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.109738 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqnps" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.112014 4823 generic.go:334] "Generic (PLEG): container finished" podID="2e548be0-d2ab-4bad-a18c-cbe203dbb314" containerID="35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291" exitCode=0 Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.112096 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.112118 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" event={"ID":"2e548be0-d2ab-4bad-a18c-cbe203dbb314","Type":"ContainerDied","Data":"35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.112161 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vw29d" event={"ID":"2e548be0-d2ab-4bad-a18c-cbe203dbb314","Type":"ContainerDied","Data":"9eea56627cf1529f1c399653f06549a40da4be6abf8570ebed23566e72d324dc"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.115339 4823 generic.go:334] "Generic (PLEG): container finished" podID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerID="8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe" exitCode=0 Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.115522 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pg9m" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.115589 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pg9m" event={"ID":"035531d0-ecfd-4d31-be47-08fc49762b7e","Type":"ContainerDied","Data":"8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.115642 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pg9m" event={"ID":"035531d0-ecfd-4d31-be47-08fc49762b7e","Type":"ContainerDied","Data":"4cc62104a28be409eeb2979c75f6fc2e43574b7363ea00829aea02fdf5f08bb1"} Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.122934 4823 scope.go:117] "RemoveContainer" containerID="609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.145715 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gw98"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.147465 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gw98"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.153427 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tkfhr"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.156303 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tkfhr"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.160178 4823 scope.go:117] "RemoveContainer" containerID="2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.167972 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vw29d"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.170834 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vw29d"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.181693 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pqnps"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.190123 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pqnps"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.194270 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pg9m"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.198201 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9pg9m"] Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.214769 4823 scope.go:117] "RemoveContainer" containerID="559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e" Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.215303 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e\": container with ID starting with 559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e not found: ID does not exist" containerID="559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e" Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.215453 4823 
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.215453 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e"} err="failed to get container status \"559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e\": rpc error: code = NotFound desc = could not find container \"559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e\": container with ID starting with 559282d3d07fedb9bb7af43b124cc70abdaa770b2c701790502719a1fdc5dd1e not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.215579 4823 scope.go:117] "RemoveContainer" containerID="609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.216033 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd\": container with ID starting with 609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd not found: ID does not exist" containerID="609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.216107 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd"} err="failed to get container status \"609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd\": rpc error: code = NotFound desc = could not find container \"609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd\": container with ID starting with 609ae763f8a75592cbb465c4b9ef893bd7ba88151201c8c68bfd81c46026c6fd not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.216136 4823 scope.go:117] "RemoveContainer" containerID="2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.216922 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6\": container with ID starting with 2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6 not found: ID does not exist" containerID="2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.217099 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6"} err="failed to get container status \"2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6\": rpc error: code = NotFound desc = could not find container \"2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6\": container with ID starting with 2242713b19a9f940f41e3642db7d360af3aae411971889f1b80ad54f71acd6f6 not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.217245 4823 scope.go:117] "RemoveContainer" containerID="4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.232377 4823 scope.go:117] "RemoveContainer" containerID="855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.247563 4823 scope.go:117] "RemoveContainer" containerID="0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.267545 4823 scope.go:117] "RemoveContainer" containerID="4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.267975 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204\": container with ID starting with 4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204 not found: ID does not exist" containerID="4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.268014 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204"} err="failed to get container status \"4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204\": rpc error: code = NotFound desc = could not find container \"4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204\": container with ID starting with 4db1441a41b4b09657ff0d48f880a1838bc7a755fa05f9eacba2689057b79204 not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.268043 4823 scope.go:117] "RemoveContainer" containerID="855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.268396 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109\": container with ID starting with 855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109 not found: ID does not exist" containerID="855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.268499 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109"} err="failed to get container status \"855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109\": rpc error: code = NotFound desc = could not find container \"855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109\": container with ID starting with 855ea7f953e32859055117b468b980b3974f35ad45870b6d60912eadac282109 not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.268589 4823 scope.go:117] "RemoveContainer" containerID="0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.269153 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1\": container with ID starting with 0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1 not found: ID does not exist" containerID="0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.269239 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1"} err="failed to get container status \"0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1\": rpc error: code = NotFound desc = could not find container \"0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1\": container with ID starting with 0b7175ee03ea7be706a737107842b7358d50de29ab1193ca724f453f1c2532c1 not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.269324 4823 scope.go:117] "RemoveContainer" containerID="807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.285286 4823 scope.go:117] "RemoveContainer" containerID="e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.300151 4823 scope.go:117] "RemoveContainer" containerID="7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.313115 4823 scope.go:117] "RemoveContainer" containerID="807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.313594 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f\": container with ID starting with 807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f not found: ID does not exist" containerID="807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.313666 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f"} err="failed to get container status \"807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f\": rpc error: code = NotFound desc = could not find container \"807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f\": container with ID starting with 807b3a2682b37aa1dc22a96110c194d71611710dd12dd56d7e7a15999b60010f not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.313719 4823 scope.go:117] "RemoveContainer" containerID="e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.314109 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d\": container with ID starting with e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d not found: ID does not exist" containerID="e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.314217 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d"} err="failed to get container status \"e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d\": rpc error: code = NotFound desc = could not find container \"e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d\": container with ID starting with e4e8ff553e942e23240faae319350b7eecdc6e59bce0368d3cafe4a6b3ec926d not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.314317 4823 scope.go:117] "RemoveContainer" containerID="7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.314631 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a\": container with ID starting with 7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a not found: ID does not exist" containerID="7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.314729 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a"} err="failed to get container status \"7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a\": rpc error: code = NotFound desc = could not find container \"7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a\": container with ID starting with 7aea86db9f643ff91b72106656296c702507d17ee2b2c9cb9ea92efce764617a not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.314828 4823 scope.go:117] "RemoveContainer" containerID="35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.326671 4823 scope.go:117] "RemoveContainer" containerID="35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.327106 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291\": container with ID starting with 35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291 not found: ID does not exist" containerID="35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.327198 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291"} err="failed to get container status \"35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291\": rpc error: code = NotFound desc = could not find container \"35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291\": container with ID starting with 35333eaa214b1f48826bb7565743ab693b01b1d18fd9351d7b26ff72eb1d9291 not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.327279 4823 scope.go:117] "RemoveContainer" containerID="8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.338893 4823 scope.go:117] "RemoveContainer" containerID="0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.351215 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" path="/var/lib/kubelet/pods/035531d0-ecfd-4d31-be47-08fc49762b7e/volumes"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.352197 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e548be0-d2ab-4bad-a18c-cbe203dbb314" path="/var/lib/kubelet/pods/2e548be0-d2ab-4bad-a18c-cbe203dbb314/volumes"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.352892 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" path="/var/lib/kubelet/pods/8edc9d3c-22fb-492b-8f1a-c488667e0df0/volumes"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.354312 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3361a52-0a28-4be5-8216-a324cbba0c60" path="/var/lib/kubelet/pods/b3361a52-0a28-4be5-8216-a324cbba0c60/volumes"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.355057 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" path="/var/lib/kubelet/pods/dfcdf50c-881a-40d0-aa3b-cac833d1d9e6/volumes"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.356837 4823 scope.go:117] "RemoveContainer" containerID="484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.370789 4823 scope.go:117] "RemoveContainer" containerID="8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.371356 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe\": container with ID starting with 8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe not found: ID does not exist" containerID="8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.371391 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe"} err="failed to get container status \"8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe\": rpc error: code = NotFound desc = could not find container \"8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe\": container with ID starting with 8c3c54beb9b8e20de185f9d4f7a2b1e339284e2adb94ad02d4fff231adaa3cbe not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.371434 4823 scope.go:117] "RemoveContainer" containerID="0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.371797 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d\": container with ID starting with 0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d not found: ID does not exist" containerID="0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.371826 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d"} err="failed to get container status \"0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d\": rpc error: code = NotFound desc = could not find container \"0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d\": container with ID starting with 0705259c5e46178c680af378e39c541042da8eb2bf60eef5310f17804f30f40d not found: ID does not exist"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.371872 4823 scope.go:117] "RemoveContainer" containerID="484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0"
Jan 21 17:21:21 crc kubenswrapper[4823]: E0121 17:21:21.372362 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0\": container with ID starting with 484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0 not found: ID does not exist" containerID="484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0"
Jan 21 17:21:21 crc kubenswrapper[4823]: I0121 17:21:21.372407 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0"} err="failed to get container status \"484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0\": rpc error: code = NotFound desc = could not find container \"484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0\": container with ID starting with 484b292461620f957cf97d26cdaa5cc2d420a725dcbf036ae983b13ff94e7cb0 not found: ID does not exist"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.197567 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x57gs"]
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198091 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerName="extract-utilities"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198107 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerName="extract-utilities"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198120 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerName="registry-server"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198128 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerName="registry-server"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198138 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e548be0-d2ab-4bad-a18c-cbe203dbb314" containerName="marketplace-operator"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198146 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e548be0-d2ab-4bad-a18c-cbe203dbb314" containerName="marketplace-operator"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198155 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerName="extract-content"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198163 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerName="extract-content"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198175 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerName="registry-server"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198182 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerName="registry-server"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198193 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerName="extract-content"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198201 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerName="extract-content"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198209 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerName="registry-server"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198216 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerName="registry-server"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198224 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerName="extract-utilities"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198231 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerName="extract-utilities"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198240 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerName="registry-server"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198247 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerName="registry-server"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198256 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerName="extract-utilities"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198263 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerName="extract-utilities"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198275 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerName="extract-utilities"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198282 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerName="extract-utilities"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198291 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerName="extract-content"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198298 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerName="extract-content"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198309 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" containerName="installer"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198316 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" containerName="installer"
Jan 21 17:21:25 crc kubenswrapper[4823]: E0121 17:21:25.198324 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerName="extract-content"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198332 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerName="extract-content"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198450 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcdf50c-881a-40d0-aa3b-cac833d1d9e6" containerName="registry-server"
Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198464 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3361a52-0a28-4be5-8216-a324cbba0c60" containerName="registry-server"
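
Note: the "RemoveStaleState" burst is triggered by the SyncLoop ADD of marketplace-operator-79b997595-x57gs: when admitting a new pod, the CPU and memory managers drop per-(podUID, containerName) assignments belonging to pods the kubelet no longer tracks. The E-level lines are housekeeping, not failures. A Go sketch of that bookkeeping (illustrative types only, not kubelet source):

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops resource assignments whose owning pod is no
    // longer in the kubelet's active set, as the cpu_manager/state_mem and
    // memory_manager lines above do on pod admission.
    func removeStaleState(assignments map[key]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"8edc9d3c", "extract-utilities"}:   "cpuset 0-1",
            {"4a5e0f22", "marketplace-operator"}: "cpuset 2-3",
        }
        active := map[string]bool{"4a5e0f22": true} // only the newly admitted pod remains
        removeStaleState(assignments, active)
        fmt.Println("remaining assignments:", assignments)
    }
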
containerName="marketplace-operator" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198486 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e34b3d5-6673-42ed-851b-ec7977fe71fc" containerName="installer" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198499 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8edc9d3c-22fb-492b-8f1a-c488667e0df0" containerName="registry-server" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198508 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="035531d0-ecfd-4d31-be47-08fc49762b7e" containerName="registry-server" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.198941 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.201572 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.205910 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.206091 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.208975 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.209746 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.252797 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x57gs"] Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.349889 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a5e0f22-8962-482f-9848-31cc195390ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x57gs\" (UID: \"4a5e0f22-8962-482f-9848-31cc195390ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.349939 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a5e0f22-8962-482f-9848-31cc195390ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x57gs\" (UID: \"4a5e0f22-8962-482f-9848-31cc195390ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.350060 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p76h\" (UniqueName: \"kubernetes.io/projected/4a5e0f22-8962-482f-9848-31cc195390ca-kube-api-access-4p76h\") pod \"marketplace-operator-79b997595-x57gs\" (UID: \"4a5e0f22-8962-482f-9848-31cc195390ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.450868 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/4a5e0f22-8962-482f-9848-31cc195390ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x57gs\" (UID: \"4a5e0f22-8962-482f-9848-31cc195390ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.450920 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a5e0f22-8962-482f-9848-31cc195390ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x57gs\" (UID: \"4a5e0f22-8962-482f-9848-31cc195390ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.450947 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p76h\" (UniqueName: \"kubernetes.io/projected/4a5e0f22-8962-482f-9848-31cc195390ca-kube-api-access-4p76h\") pod \"marketplace-operator-79b997595-x57gs\" (UID: \"4a5e0f22-8962-482f-9848-31cc195390ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.453572 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a5e0f22-8962-482f-9848-31cc195390ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x57gs\" (UID: \"4a5e0f22-8962-482f-9848-31cc195390ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.459746 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a5e0f22-8962-482f-9848-31cc195390ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x57gs\" (UID: \"4a5e0f22-8962-482f-9848-31cc195390ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.471848 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p76h\" (UniqueName: \"kubernetes.io/projected/4a5e0f22-8962-482f-9848-31cc195390ca-kube-api-access-4p76h\") pod \"marketplace-operator-79b997595-x57gs\" (UID: \"4a5e0f22-8962-482f-9848-31cc195390ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.513320 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:25 crc kubenswrapper[4823]: I0121 17:21:25.724786 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x57gs"] Jan 21 17:21:26 crc kubenswrapper[4823]: I0121 17:21:26.144674 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" event={"ID":"4a5e0f22-8962-482f-9848-31cc195390ca","Type":"ContainerStarted","Data":"e6bf55a93f087ee388659c7b406df7255e0d5aed42942815eae8f4efbc6e4e5d"} Jan 21 17:21:26 crc kubenswrapper[4823]: I0121 17:21:26.144728 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" event={"ID":"4a5e0f22-8962-482f-9848-31cc195390ca","Type":"ContainerStarted","Data":"7de086b0df71aaeb8b23190750139e3006404ff73f5976ecb53d8795631161ea"} Jan 21 17:21:26 crc kubenswrapper[4823]: I0121 17:21:26.146072 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:26 crc kubenswrapper[4823]: I0121 17:21:26.150081 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" Jan 21 17:21:26 crc kubenswrapper[4823]: I0121 17:21:26.161403 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-x57gs" podStartSLOduration=1.161383402 podStartE2EDuration="1.161383402s" podCreationTimestamp="2026-01-21 17:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:21:26.15768635 +0000 UTC m=+287.083817210" watchObservedRunningTime="2026-01-21 17:21:26.161383402 +0000 UTC m=+287.087514262" Jan 21 17:21:28 crc kubenswrapper[4823]: I0121 17:21:28.525076 4823 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 17:21:28 crc kubenswrapper[4823]: I0121 17:21:28.525556 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919" gracePeriod=5 Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.116668 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.117141 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.187659 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.187713 4823 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919" exitCode=137 Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.187769 4823 scope.go:117] "RemoveContainer" containerID="1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.187813 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.206251 4823 scope.go:117] "RemoveContainer" containerID="1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919" Jan 21 17:21:34 crc kubenswrapper[4823]: E0121 17:21:34.206662 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919\": container with ID starting with 1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919 not found: ID does not exist" containerID="1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.206726 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919"} err="failed to get container status \"1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919\": rpc error: code = NotFound desc = could not find container \"1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919\": container with ID starting with 1ce67e53b8c1ef267911c46dc713243312998746d8a1be0116900c6c67346919 not found: ID does not exist" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.257458 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.257554 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.257585 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.257608 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.257631 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.257656 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.257697 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.257771 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.257803 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.258168 4823 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.258215 4823 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.258226 4823 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.258235 4823 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.271042 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:21:34 crc kubenswrapper[4823]: I0121 17:21:34.359126 4823 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:35 crc kubenswrapper[4823]: I0121 17:21:35.353216 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 17:21:37 crc kubenswrapper[4823]: I0121 17:21:37.713998 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 17:21:38 crc kubenswrapper[4823]: I0121 17:21:38.067127 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 17:21:39 crc kubenswrapper[4823]: I0121 17:21:39.206476 4823 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.019790 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qxt5l"] Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.020042 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" podUID="e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" containerName="controller-manager" containerID="cri-o://f8994a7e04ae7d47874e0f19d7d148f231b47a859a38b70af6d78359324a0508" gracePeriod=30 Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.119143 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"] Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.119670 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" podUID="c9647bb1-3272-4e92-8b16-ff16a90dfa8d" containerName="route-controller-manager" containerID="cri-o://855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468" gracePeriod=30 Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.220772 4823 generic.go:334] "Generic (PLEG): container finished" podID="e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" containerID="f8994a7e04ae7d47874e0f19d7d148f231b47a859a38b70af6d78359324a0508" exitCode=0 Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.220823 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" event={"ID":"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620","Type":"ContainerDied","Data":"f8994a7e04ae7d47874e0f19d7d148f231b47a859a38b70af6d78359324a0508"} Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.408121 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.514521 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.535505 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7psph\" (UniqueName: \"kubernetes.io/projected/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-kube-api-access-7psph\") pod \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.535573 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-proxy-ca-bundles\") pod \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.535597 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-client-ca\") pod \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.535663 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-serving-cert\") pod \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.535693 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-config\") pod \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\" (UID: \"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620\") " Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.536467 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-config" (OuterVolumeSpecName: "config") pod "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" (UID: "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.536993 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" (UID: "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.540675 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-kube-api-access-7psph" (OuterVolumeSpecName: "kube-api-access-7psph") pod "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" (UID: "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620"). InnerVolumeSpecName "kube-api-access-7psph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.543048 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-client-ca" (OuterVolumeSpecName: "client-ca") pod "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" (UID: "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.546099 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" (UID: "e8ccfed5-2aa0-4c5b-a7cc-9714c379d620"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.636822 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-serving-cert\") pod \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637262 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-config\") pod \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637319 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-client-ca\") pod \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637359 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td6x9\" (UniqueName: \"kubernetes.io/projected/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-kube-api-access-td6x9\") pod \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\" (UID: \"c9647bb1-3272-4e92-8b16-ff16a90dfa8d\") " Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637573 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637586 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7psph\" (UniqueName: \"kubernetes.io/projected/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-kube-api-access-7psph\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637597 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637607 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637615 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637844 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-config" (OuterVolumeSpecName: "config") pod "c9647bb1-3272-4e92-8b16-ff16a90dfa8d" (UID: 
"c9647bb1-3272-4e92-8b16-ff16a90dfa8d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.637970 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-client-ca" (OuterVolumeSpecName: "client-ca") pod "c9647bb1-3272-4e92-8b16-ff16a90dfa8d" (UID: "c9647bb1-3272-4e92-8b16-ff16a90dfa8d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.640714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c9647bb1-3272-4e92-8b16-ff16a90dfa8d" (UID: "c9647bb1-3272-4e92-8b16-ff16a90dfa8d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.640970 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-kube-api-access-td6x9" (OuterVolumeSpecName: "kube-api-access-td6x9") pod "c9647bb1-3272-4e92-8b16-ff16a90dfa8d" (UID: "c9647bb1-3272-4e92-8b16-ff16a90dfa8d"). InnerVolumeSpecName "kube-api-access-td6x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.738808 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td6x9\" (UniqueName: \"kubernetes.io/projected/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-kube-api-access-td6x9\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.739057 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.739071 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:40 crc kubenswrapper[4823]: I0121 17:21:40.739083 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9647bb1-3272-4e92-8b16-ff16a90dfa8d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.230789 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.237916 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qxt5l" event={"ID":"e8ccfed5-2aa0-4c5b-a7cc-9714c379d620","Type":"ContainerDied","Data":"207f283f7ec34ffb6dfb68141ef977a211644b469bb7e92082cdeca935b7bc61"} Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.237959 4823 scope.go:117] "RemoveContainer" containerID="f8994a7e04ae7d47874e0f19d7d148f231b47a859a38b70af6d78359324a0508" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.239817 4823 generic.go:334] "Generic (PLEG): container finished" podID="c9647bb1-3272-4e92-8b16-ff16a90dfa8d" containerID="855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468" exitCode=0 Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.239882 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" event={"ID":"c9647bb1-3272-4e92-8b16-ff16a90dfa8d","Type":"ContainerDied","Data":"855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468"} Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.239912 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" event={"ID":"c9647bb1-3272-4e92-8b16-ff16a90dfa8d","Type":"ContainerDied","Data":"c2e19ef6f3bce0cf76c189d8bfef6d139a2c9d4f889c872e7da11743cfcdc749"} Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.240001 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.260461 4823 scope.go:117] "RemoveContainer" containerID="855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.261910 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qxt5l"] Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.264919 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qxt5l"] Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.275692 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"] Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.276215 4823 scope.go:117] "RemoveContainer" containerID="855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468" Jan 21 17:21:41 crc kubenswrapper[4823]: E0121 17:21:41.276693 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468\": container with ID starting with 855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468 not found: ID does not exist" containerID="855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.276731 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468"} err="failed to get container status \"855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468\": rpc error: code = 
NotFound desc = could not find container \"855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468\": container with ID starting with 855d45ccace992c5e4001d52141cf776089ba46eb36234fa94a107cd9b978468 not found: ID does not exist" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.279893 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jrzz"] Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.350220 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9647bb1-3272-4e92-8b16-ff16a90dfa8d" path="/var/lib/kubelet/pods/c9647bb1-3272-4e92-8b16-ff16a90dfa8d/volumes" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.350821 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" path="/var/lib/kubelet/pods/e8ccfed5-2aa0-4c5b-a7cc-9714c379d620/volumes" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.617944 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t"] Jan 21 17:21:41 crc kubenswrapper[4823]: E0121 17:21:41.618294 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9647bb1-3272-4e92-8b16-ff16a90dfa8d" containerName="route-controller-manager" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.618318 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9647bb1-3272-4e92-8b16-ff16a90dfa8d" containerName="route-controller-manager" Jan 21 17:21:41 crc kubenswrapper[4823]: E0121 17:21:41.618334 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.618349 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 17:21:41 crc kubenswrapper[4823]: E0121 17:21:41.618388 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" containerName="controller-manager" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.618402 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" containerName="controller-manager" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.618596 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9647bb1-3272-4e92-8b16-ff16a90dfa8d" containerName="route-controller-manager" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.618616 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.618637 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ccfed5-2aa0-4c5b-a7cc-9714c379d620" containerName="controller-manager" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.619343 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.621726 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.622110 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.622176 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.622651 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.622674 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.622714 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.624386 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-fm687"] Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.625292 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.630064 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.630172 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.630342 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.630189 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.630448 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.630537 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.643251 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-fm687"] Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.648985 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.650397 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t"] Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.750614 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-client-ca\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.750673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-config\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.750697 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab41d6d2-3164-4da1-b14f-58a747be91db-serving-cert\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.750728 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-serving-cert\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.750754 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf7jw\" (UniqueName: \"kubernetes.io/projected/ab41d6d2-3164-4da1-b14f-58a747be91db-kube-api-access-bf7jw\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.750809 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-proxy-ca-bundles\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.751054 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkkh2\" (UniqueName: \"kubernetes.io/projected/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-kube-api-access-qkkh2\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.751224 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-config\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.751326 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-client-ca\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.852454 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkkh2\" (UniqueName: \"kubernetes.io/projected/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-kube-api-access-qkkh2\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.852518 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-config\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.852546 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-client-ca\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.852572 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-client-ca\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.852592 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-config\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.852613 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab41d6d2-3164-4da1-b14f-58a747be91db-serving-cert\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.852644 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-serving-cert\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.852671 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf7jw\" (UniqueName: \"kubernetes.io/projected/ab41d6d2-3164-4da1-b14f-58a747be91db-kube-api-access-bf7jw\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: 
\"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.852705 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-proxy-ca-bundles\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.853749 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-client-ca\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.856616 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-client-ca\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.857687 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-proxy-ca-bundles\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.857887 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-config\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.860212 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-serving-cert\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.860837 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab41d6d2-3164-4da1-b14f-58a747be91db-serving-cert\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.861189 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-config\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.876789 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-bf7jw\" (UniqueName: \"kubernetes.io/projected/ab41d6d2-3164-4da1-b14f-58a747be91db-kube-api-access-bf7jw\") pod \"controller-manager-6bf9bfbb56-2zn9t\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.876930 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkkh2\" (UniqueName: \"kubernetes.io/projected/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-kube-api-access-qkkh2\") pod \"route-controller-manager-6c967b544-fm687\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.954623 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:41 crc kubenswrapper[4823]: I0121 17:21:41.966925 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:42 crc kubenswrapper[4823]: I0121 17:21:42.155217 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t"] Jan 21 17:21:42 crc kubenswrapper[4823]: W0121 17:21:42.170564 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab41d6d2_3164_4da1_b14f_58a747be91db.slice/crio-e6821372a8efa7ce487e1bddd58c568c696ced75f852bd2562ff10fcbe3d4d33 WatchSource:0}: Error finding container e6821372a8efa7ce487e1bddd58c568c696ced75f852bd2562ff10fcbe3d4d33: Status 404 returned error can't find the container with id e6821372a8efa7ce487e1bddd58c568c696ced75f852bd2562ff10fcbe3d4d33 Jan 21 17:21:42 crc kubenswrapper[4823]: I0121 17:21:42.218676 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-fm687"] Jan 21 17:21:42 crc kubenswrapper[4823]: W0121 17:21:42.223497 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a043f46_c234_49f9_a3d7_7b1d2ab502e2.slice/crio-89387ffb8d90206005b9177f03ce60a41d30307324b3ca0cf51d01e8b8afa06d WatchSource:0}: Error finding container 89387ffb8d90206005b9177f03ce60a41d30307324b3ca0cf51d01e8b8afa06d: Status 404 returned error can't find the container with id 89387ffb8d90206005b9177f03ce60a41d30307324b3ca0cf51d01e8b8afa06d Jan 21 17:21:42 crc kubenswrapper[4823]: I0121 17:21:42.245149 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" event={"ID":"8a043f46-c234-49f9-a3d7-7b1d2ab502e2","Type":"ContainerStarted","Data":"89387ffb8d90206005b9177f03ce60a41d30307324b3ca0cf51d01e8b8afa06d"} Jan 21 17:21:42 crc kubenswrapper[4823]: I0121 17:21:42.247474 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" event={"ID":"ab41d6d2-3164-4da1-b14f-58a747be91db","Type":"ContainerStarted","Data":"e6821372a8efa7ce487e1bddd58c568c696ced75f852bd2562ff10fcbe3d4d33"} Jan 21 17:21:42 crc kubenswrapper[4823]: I0121 17:21:42.283674 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 17:21:42 crc 
kubenswrapper[4823]: I0121 17:21:42.392204 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 17:21:43 crc kubenswrapper[4823]: I0121 17:21:43.258542 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" event={"ID":"ab41d6d2-3164-4da1-b14f-58a747be91db","Type":"ContainerStarted","Data":"0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9"} Jan 21 17:21:43 crc kubenswrapper[4823]: I0121 17:21:43.259156 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:43 crc kubenswrapper[4823]: I0121 17:21:43.261714 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" event={"ID":"8a043f46-c234-49f9-a3d7-7b1d2ab502e2","Type":"ContainerStarted","Data":"d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb"} Jan 21 17:21:43 crc kubenswrapper[4823]: I0121 17:21:43.261877 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:43 crc kubenswrapper[4823]: I0121 17:21:43.264902 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:43 crc kubenswrapper[4823]: I0121 17:21:43.268917 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:43 crc kubenswrapper[4823]: I0121 17:21:43.276090 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" podStartSLOduration=3.276074488 podStartE2EDuration="3.276074488s" podCreationTimestamp="2026-01-21 17:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:21:43.274480868 +0000 UTC m=+304.200611728" watchObservedRunningTime="2026-01-21 17:21:43.276074488 +0000 UTC m=+304.202205348" Jan 21 17:21:44 crc kubenswrapper[4823]: I0121 17:21:44.713828 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" podStartSLOduration=4.713801758 podStartE2EDuration="4.713801758s" podCreationTimestamp="2026-01-21 17:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:21:43.311563262 +0000 UTC m=+304.237694142" watchObservedRunningTime="2026-01-21 17:21:44.713801758 +0000 UTC m=+305.639932618" Jan 21 17:21:44 crc kubenswrapper[4823]: I0121 17:21:44.716040 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t"] Jan 21 17:21:44 crc kubenswrapper[4823]: I0121 17:21:44.745256 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-fm687"] Jan 21 17:21:46 crc kubenswrapper[4823]: I0121 17:21:46.276909 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" podUID="ab41d6d2-3164-4da1-b14f-58a747be91db" 
containerName="controller-manager" containerID="cri-o://0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9" gracePeriod=30 Jan 21 17:21:46 crc kubenswrapper[4823]: I0121 17:21:46.277424 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" podUID="8a043f46-c234-49f9-a3d7-7b1d2ab502e2" containerName="route-controller-manager" containerID="cri-o://d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb" gracePeriod=30 Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.228671 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.273496 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p"] Jan 21 17:21:47 crc kubenswrapper[4823]: E0121 17:21:47.273761 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a043f46-c234-49f9-a3d7-7b1d2ab502e2" containerName="route-controller-manager" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.273774 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a043f46-c234-49f9-a3d7-7b1d2ab502e2" containerName="route-controller-manager" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.273917 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a043f46-c234-49f9-a3d7-7b1d2ab502e2" containerName="route-controller-manager" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.274407 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.279549 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.285572 4823 generic.go:334] "Generic (PLEG): container finished" podID="8a043f46-c234-49f9-a3d7-7b1d2ab502e2" containerID="d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb" exitCode=0 Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.285631 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.285636 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" event={"ID":"8a043f46-c234-49f9-a3d7-7b1d2ab502e2","Type":"ContainerDied","Data":"d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb"} Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.285759 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-fm687" event={"ID":"8a043f46-c234-49f9-a3d7-7b1d2ab502e2","Type":"ContainerDied","Data":"89387ffb8d90206005b9177f03ce60a41d30307324b3ca0cf51d01e8b8afa06d"} Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.285780 4823 scope.go:117] "RemoveContainer" containerID="d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.288431 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p"] Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.289252 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab41d6d2-3164-4da1-b14f-58a747be91db" containerID="0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9" exitCode=0 Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.289295 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" event={"ID":"ab41d6d2-3164-4da1-b14f-58a747be91db","Type":"ContainerDied","Data":"0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9"} Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.289320 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" event={"ID":"ab41d6d2-3164-4da1-b14f-58a747be91db","Type":"ContainerDied","Data":"e6821372a8efa7ce487e1bddd58c568c696ced75f852bd2562ff10fcbe3d4d33"} Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.289585 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.306609 4823 scope.go:117] "RemoveContainer" containerID="d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb" Jan 21 17:21:47 crc kubenswrapper[4823]: E0121 17:21:47.307180 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb\": container with ID starting with d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb not found: ID does not exist" containerID="d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.307217 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb"} err="failed to get container status \"d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb\": rpc error: code = NotFound desc = could not find container \"d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb\": container with ID starting with d9de59c5e6bf19a0115e8bcbba4ac7014ebeb2fba28a595661e210083179f0eb not found: ID does not exist" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.307274 4823 scope.go:117] "RemoveContainer" containerID="0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.327310 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkkh2\" (UniqueName: \"kubernetes.io/projected/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-kube-api-access-qkkh2\") pod \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.327382 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-config\") pod \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.327466 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-client-ca\") pod \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.327510 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-serving-cert\") pod \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\" (UID: \"8a043f46-c234-49f9-a3d7-7b1d2ab502e2\") " Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.328784 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-config" (OuterVolumeSpecName: "config") pod "8a043f46-c234-49f9-a3d7-7b1d2ab502e2" (UID: "8a043f46-c234-49f9-a3d7-7b1d2ab502e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.329280 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-client-ca" (OuterVolumeSpecName: "client-ca") pod "8a043f46-c234-49f9-a3d7-7b1d2ab502e2" (UID: "8a043f46-c234-49f9-a3d7-7b1d2ab502e2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.330217 4823 scope.go:117] "RemoveContainer" containerID="0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9" Jan 21 17:21:47 crc kubenswrapper[4823]: E0121 17:21:47.359380 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9\": container with ID starting with 0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9 not found: ID does not exist" containerID="0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.359409 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9"} err="failed to get container status \"0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9\": rpc error: code = NotFound desc = could not find container \"0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9\": container with ID starting with 0d5edc182e19925632c840f0e6106fff0ffa2d3019efa7c696fc76e777aaefc9 not found: ID does not exist" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.360563 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8a043f46-c234-49f9-a3d7-7b1d2ab502e2" (UID: "8a043f46-c234-49f9-a3d7-7b1d2ab502e2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.360593 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-kube-api-access-qkkh2" (OuterVolumeSpecName: "kube-api-access-qkkh2") pod "8a043f46-c234-49f9-a3d7-7b1d2ab502e2" (UID: "8a043f46-c234-49f9-a3d7-7b1d2ab502e2"). InnerVolumeSpecName "kube-api-access-qkkh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.429357 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf7jw\" (UniqueName: \"kubernetes.io/projected/ab41d6d2-3164-4da1-b14f-58a747be91db-kube-api-access-bf7jw\") pod \"ab41d6d2-3164-4da1-b14f-58a747be91db\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.429404 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-client-ca\") pod \"ab41d6d2-3164-4da1-b14f-58a747be91db\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.429433 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab41d6d2-3164-4da1-b14f-58a747be91db-serving-cert\") pod \"ab41d6d2-3164-4da1-b14f-58a747be91db\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.429455 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-config\") pod \"ab41d6d2-3164-4da1-b14f-58a747be91db\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.429472 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-proxy-ca-bundles\") pod \"ab41d6d2-3164-4da1-b14f-58a747be91db\" (UID: \"ab41d6d2-3164-4da1-b14f-58a747be91db\") " Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.430802 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-client-ca" (OuterVolumeSpecName: "client-ca") pod "ab41d6d2-3164-4da1-b14f-58a747be91db" (UID: "ab41d6d2-3164-4da1-b14f-58a747be91db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.430837 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-serving-cert\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.430887 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-config\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431002 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ab41d6d2-3164-4da1-b14f-58a747be91db" (UID: "ab41d6d2-3164-4da1-b14f-58a747be91db"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431066 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-client-ca\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431085 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t69w6\" (UniqueName: \"kubernetes.io/projected/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-kube-api-access-t69w6\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431126 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-config" (OuterVolumeSpecName: "config") pod "ab41d6d2-3164-4da1-b14f-58a747be91db" (UID: "ab41d6d2-3164-4da1-b14f-58a747be91db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431134 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431174 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431185 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431194 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431205 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkkh2\" (UniqueName: \"kubernetes.io/projected/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-kube-api-access-qkkh2\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.431217 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a043f46-c234-49f9-a3d7-7b1d2ab502e2-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.434347 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab41d6d2-3164-4da1-b14f-58a747be91db-kube-api-access-bf7jw" (OuterVolumeSpecName: "kube-api-access-bf7jw") pod "ab41d6d2-3164-4da1-b14f-58a747be91db" (UID: "ab41d6d2-3164-4da1-b14f-58a747be91db"). InnerVolumeSpecName "kube-api-access-bf7jw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.435133 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab41d6d2-3164-4da1-b14f-58a747be91db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ab41d6d2-3164-4da1-b14f-58a747be91db" (UID: "ab41d6d2-3164-4da1-b14f-58a747be91db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.542314 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-client-ca\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.542392 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t69w6\" (UniqueName: \"kubernetes.io/projected/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-kube-api-access-t69w6\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.542446 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-serving-cert\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.542470 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-config\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.542597 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf7jw\" (UniqueName: \"kubernetes.io/projected/ab41d6d2-3164-4da1-b14f-58a747be91db-kube-api-access-bf7jw\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.542612 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab41d6d2-3164-4da1-b14f-58a747be91db-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.542626 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab41d6d2-3164-4da1-b14f-58a747be91db-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.543820 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-config\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.546707 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-client-ca\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.551814 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-serving-cert\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.573961 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t69w6\" (UniqueName: \"kubernetes.io/projected/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-kube-api-access-t69w6\") pod \"route-controller-manager-7cdbf6b485-vfk9p\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.594654 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.617226 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-fm687"] Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.621675 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-fm687"] Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.635551 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t"] Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.640586 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-2zn9t"] Jan 21 17:21:47 crc kubenswrapper[4823]: I0121 17:21:47.783895 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p"] Jan 21 17:21:48 crc kubenswrapper[4823]: I0121 17:21:48.297743 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" event={"ID":"04be5ced-7fba-46c1-a0c9-5fbbc004ffea","Type":"ContainerStarted","Data":"c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa"} Jan 21 17:21:48 crc kubenswrapper[4823]: I0121 17:21:48.297790 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" event={"ID":"04be5ced-7fba-46c1-a0c9-5fbbc004ffea","Type":"ContainerStarted","Data":"e0152aae9ce65cbe4cae44865a0345dc366fda7d97ae4318ca557590b134fb5c"} Jan 21 17:21:48 crc kubenswrapper[4823]: I0121 17:21:48.298196 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:48 crc kubenswrapper[4823]: I0121 17:21:48.303661 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:21:48 crc 
kubenswrapper[4823]: I0121 17:21:48.317329 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" podStartSLOduration=3.317313941 podStartE2EDuration="3.317313941s" podCreationTimestamp="2026-01-21 17:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:21:48.315391183 +0000 UTC m=+309.241522083" watchObservedRunningTime="2026-01-21 17:21:48.317313941 +0000 UTC m=+309.243444801" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.350030 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a043f46-c234-49f9-a3d7-7b1d2ab502e2" path="/var/lib/kubelet/pods/8a043f46-c234-49f9-a3d7-7b1d2ab502e2/volumes" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.351402 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab41d6d2-3164-4da1-b14f-58a747be91db" path="/var/lib/kubelet/pods/ab41d6d2-3164-4da1-b14f-58a747be91db/volumes" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.627806 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf"] Jan 21 17:21:49 crc kubenswrapper[4823]: E0121 17:21:49.628046 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab41d6d2-3164-4da1-b14f-58a747be91db" containerName="controller-manager" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.628059 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab41d6d2-3164-4da1-b14f-58a747be91db" containerName="controller-manager" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.628136 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab41d6d2-3164-4da1-b14f-58a747be91db" containerName="controller-manager" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.628456 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.630526 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.630903 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.631200 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.631412 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.631630 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.634744 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.637123 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.646279 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf"] Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.769979 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-client-ca\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.770065 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcx8v\" (UniqueName: \"kubernetes.io/projected/455231a6-2301-4370-86c6-f824ff47ddf4-kube-api-access-jcx8v\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.770185 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-proxy-ca-bundles\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.770239 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-config\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.770276 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/455231a6-2301-4370-86c6-f824ff47ddf4-serving-cert\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.871011 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcx8v\" (UniqueName: \"kubernetes.io/projected/455231a6-2301-4370-86c6-f824ff47ddf4-kube-api-access-jcx8v\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.871068 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-proxy-ca-bundles\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.871097 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-config\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.871115 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455231a6-2301-4370-86c6-f824ff47ddf4-serving-cert\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.871164 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-client-ca\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.872035 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-client-ca\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.872494 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-proxy-ca-bundles\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.875554 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-config\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " 
pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.885299 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455231a6-2301-4370-86c6-f824ff47ddf4-serving-cert\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.894773 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcx8v\" (UniqueName: \"kubernetes.io/projected/455231a6-2301-4370-86c6-f824ff47ddf4-kube-api-access-jcx8v\") pod \"controller-manager-59c7bcfdd9-lkzbf\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:49 crc kubenswrapper[4823]: I0121 17:21:49.946379 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:50 crc kubenswrapper[4823]: I0121 17:21:50.127618 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf"] Jan 21 17:21:50 crc kubenswrapper[4823]: I0121 17:21:50.308055 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" event={"ID":"455231a6-2301-4370-86c6-f824ff47ddf4","Type":"ContainerStarted","Data":"58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6"} Jan 21 17:21:50 crc kubenswrapper[4823]: I0121 17:21:50.308103 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" event={"ID":"455231a6-2301-4370-86c6-f824ff47ddf4","Type":"ContainerStarted","Data":"37614affae6324d02f681c698737d2be26b63448139d06c93ec1f44419ae8cc4"} Jan 21 17:21:50 crc kubenswrapper[4823]: I0121 17:21:50.321964 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" podStartSLOduration=5.3219448 podStartE2EDuration="5.3219448s" podCreationTimestamp="2026-01-21 17:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:21:50.321131729 +0000 UTC m=+311.247262609" watchObservedRunningTime="2026-01-21 17:21:50.3219448 +0000 UTC m=+311.248075660" Jan 21 17:21:51 crc kubenswrapper[4823]: I0121 17:21:51.314102 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:51 crc kubenswrapper[4823]: I0121 17:21:51.321460 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.062162 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nfcp9"] Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.063344 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.067007 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.078125 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nfcp9"] Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.205180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07eebc10-1379-4c90-b506-ab6d548702f2-catalog-content\") pod \"redhat-operators-nfcp9\" (UID: \"07eebc10-1379-4c90-b506-ab6d548702f2\") " pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.205533 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxbhk\" (UniqueName: \"kubernetes.io/projected/07eebc10-1379-4c90-b506-ab6d548702f2-kube-api-access-cxbhk\") pod \"redhat-operators-nfcp9\" (UID: \"07eebc10-1379-4c90-b506-ab6d548702f2\") " pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.205639 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07eebc10-1379-4c90-b506-ab6d548702f2-utilities\") pod \"redhat-operators-nfcp9\" (UID: \"07eebc10-1379-4c90-b506-ab6d548702f2\") " pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.260154 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nlg7m"] Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.261205 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.263611 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.279401 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlg7m"] Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.306964 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07eebc10-1379-4c90-b506-ab6d548702f2-catalog-content\") pod \"redhat-operators-nfcp9\" (UID: \"07eebc10-1379-4c90-b506-ab6d548702f2\") " pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.307054 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxbhk\" (UniqueName: \"kubernetes.io/projected/07eebc10-1379-4c90-b506-ab6d548702f2-kube-api-access-cxbhk\") pod \"redhat-operators-nfcp9\" (UID: \"07eebc10-1379-4c90-b506-ab6d548702f2\") " pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.307087 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07eebc10-1379-4c90-b506-ab6d548702f2-utilities\") pod \"redhat-operators-nfcp9\" (UID: \"07eebc10-1379-4c90-b506-ab6d548702f2\") " pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.307581 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07eebc10-1379-4c90-b506-ab6d548702f2-catalog-content\") pod \"redhat-operators-nfcp9\" (UID: \"07eebc10-1379-4c90-b506-ab6d548702f2\") " pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.307639 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07eebc10-1379-4c90-b506-ab6d548702f2-utilities\") pod \"redhat-operators-nfcp9\" (UID: \"07eebc10-1379-4c90-b506-ab6d548702f2\") " pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.329031 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxbhk\" (UniqueName: \"kubernetes.io/projected/07eebc10-1379-4c90-b506-ab6d548702f2-kube-api-access-cxbhk\") pod \"redhat-operators-nfcp9\" (UID: \"07eebc10-1379-4c90-b506-ab6d548702f2\") " pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.382282 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.409736 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3235d0ea-4f5c-4b04-b519-c8c7561a41ee-utilities\") pod \"redhat-marketplace-nlg7m\" (UID: \"3235d0ea-4f5c-4b04-b519-c8c7561a41ee\") " pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.409782 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3235d0ea-4f5c-4b04-b519-c8c7561a41ee-catalog-content\") pod \"redhat-marketplace-nlg7m\" (UID: \"3235d0ea-4f5c-4b04-b519-c8c7561a41ee\") " pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.409873 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hmv\" (UniqueName: \"kubernetes.io/projected/3235d0ea-4f5c-4b04-b519-c8c7561a41ee-kube-api-access-k5hmv\") pod \"redhat-marketplace-nlg7m\" (UID: \"3235d0ea-4f5c-4b04-b519-c8c7561a41ee\") " pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.511577 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3235d0ea-4f5c-4b04-b519-c8c7561a41ee-utilities\") pod \"redhat-marketplace-nlg7m\" (UID: \"3235d0ea-4f5c-4b04-b519-c8c7561a41ee\") " pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.511899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3235d0ea-4f5c-4b04-b519-c8c7561a41ee-catalog-content\") pod \"redhat-marketplace-nlg7m\" (UID: \"3235d0ea-4f5c-4b04-b519-c8c7561a41ee\") " pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.511977 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hmv\" (UniqueName: \"kubernetes.io/projected/3235d0ea-4f5c-4b04-b519-c8c7561a41ee-kube-api-access-k5hmv\") pod \"redhat-marketplace-nlg7m\" (UID: \"3235d0ea-4f5c-4b04-b519-c8c7561a41ee\") " pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.512557 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3235d0ea-4f5c-4b04-b519-c8c7561a41ee-catalog-content\") pod \"redhat-marketplace-nlg7m\" (UID: \"3235d0ea-4f5c-4b04-b519-c8c7561a41ee\") " pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.512615 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3235d0ea-4f5c-4b04-b519-c8c7561a41ee-utilities\") pod \"redhat-marketplace-nlg7m\" (UID: \"3235d0ea-4f5c-4b04-b519-c8c7561a41ee\") " pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.532611 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hmv\" (UniqueName: \"kubernetes.io/projected/3235d0ea-4f5c-4b04-b519-c8c7561a41ee-kube-api-access-k5hmv\") pod 
\"redhat-marketplace-nlg7m\" (UID: \"3235d0ea-4f5c-4b04-b519-c8c7561a41ee\") " pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.583214 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.793600 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nfcp9"] Jan 21 17:21:52 crc kubenswrapper[4823]: W0121 17:21:52.808297 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07eebc10_1379_4c90_b506_ab6d548702f2.slice/crio-d81665eea0fe4ce40642379d59e9a2b893cdb8834879c1219b27abe9743c6ca6 WatchSource:0}: Error finding container d81665eea0fe4ce40642379d59e9a2b893cdb8834879c1219b27abe9743c6ca6: Status 404 returned error can't find the container with id d81665eea0fe4ce40642379d59e9a2b893cdb8834879c1219b27abe9743c6ca6 Jan 21 17:21:52 crc kubenswrapper[4823]: I0121 17:21:52.974814 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlg7m"] Jan 21 17:21:53 crc kubenswrapper[4823]: W0121 17:21:53.011481 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3235d0ea_4f5c_4b04_b519_c8c7561a41ee.slice/crio-3787ae1777c3956305f221aabef88a2d917fedff992fb10086b2efae71e53d4c WatchSource:0}: Error finding container 3787ae1777c3956305f221aabef88a2d917fedff992fb10086b2efae71e53d4c: Status 404 returned error can't find the container with id 3787ae1777c3956305f221aabef88a2d917fedff992fb10086b2efae71e53d4c Jan 21 17:21:53 crc kubenswrapper[4823]: I0121 17:21:53.325805 4823 generic.go:334] "Generic (PLEG): container finished" podID="3235d0ea-4f5c-4b04-b519-c8c7561a41ee" containerID="7d23ca6c8afed4dcc8a64e6bd483b6cb001eb00b1d9d6afdb5d4417f5cc69a4f" exitCode=0 Jan 21 17:21:53 crc kubenswrapper[4823]: I0121 17:21:53.325903 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlg7m" event={"ID":"3235d0ea-4f5c-4b04-b519-c8c7561a41ee","Type":"ContainerDied","Data":"7d23ca6c8afed4dcc8a64e6bd483b6cb001eb00b1d9d6afdb5d4417f5cc69a4f"} Jan 21 17:21:53 crc kubenswrapper[4823]: I0121 17:21:53.325937 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlg7m" event={"ID":"3235d0ea-4f5c-4b04-b519-c8c7561a41ee","Type":"ContainerStarted","Data":"3787ae1777c3956305f221aabef88a2d917fedff992fb10086b2efae71e53d4c"} Jan 21 17:21:53 crc kubenswrapper[4823]: I0121 17:21:53.328914 4823 generic.go:334] "Generic (PLEG): container finished" podID="07eebc10-1379-4c90-b506-ab6d548702f2" containerID="dbcd3b79f5be6591278e86dbc5b04596f0e6200045c14187f59184a857200615" exitCode=0 Jan 21 17:21:53 crc kubenswrapper[4823]: I0121 17:21:53.329983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfcp9" event={"ID":"07eebc10-1379-4c90-b506-ab6d548702f2","Type":"ContainerDied","Data":"dbcd3b79f5be6591278e86dbc5b04596f0e6200045c14187f59184a857200615"} Jan 21 17:21:53 crc kubenswrapper[4823]: I0121 17:21:53.330006 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfcp9" event={"ID":"07eebc10-1379-4c90-b506-ab6d548702f2","Type":"ContainerStarted","Data":"d81665eea0fe4ce40642379d59e9a2b893cdb8834879c1219b27abe9743c6ca6"} Jan 21 17:21:54 
crc kubenswrapper[4823]: I0121 17:21:54.335345 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfcp9" event={"ID":"07eebc10-1379-4c90-b506-ab6d548702f2","Type":"ContainerStarted","Data":"e6200f69514c26397c2b68447ae4233b4935e98b770b92369d33bc7b54d49b5e"} Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.337439 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlg7m" event={"ID":"3235d0ea-4f5c-4b04-b519-c8c7561a41ee","Type":"ContainerStarted","Data":"838c5f17a262aa23b458e11567473260d7b7a6c1f46f77c089318db11e24fd14"} Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.462687 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dj79q"] Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.464351 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.467122 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dj79q"] Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.468948 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.469648 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.634295 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-catalog-content\") pod \"certified-operators-dj79q\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.634344 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slj22\" (UniqueName: \"kubernetes.io/projected/f48059ea-e3f3-4b21-a108-ea45d532536a-kube-api-access-slj22\") pod \"certified-operators-dj79q\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.634372 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-utilities\") pod \"certified-operators-dj79q\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.657110 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9slhm"] Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.658272 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.661136 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.683595 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9slhm"] Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.735223 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-catalog-content\") pod \"certified-operators-dj79q\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.735281 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slj22\" (UniqueName: \"kubernetes.io/projected/f48059ea-e3f3-4b21-a108-ea45d532536a-kube-api-access-slj22\") pod \"certified-operators-dj79q\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.735315 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-utilities\") pod \"certified-operators-dj79q\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.735716 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-catalog-content\") pod \"certified-operators-dj79q\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.735746 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-utilities\") pod \"certified-operators-dj79q\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.758129 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slj22\" (UniqueName: \"kubernetes.io/projected/f48059ea-e3f3-4b21-a108-ea45d532536a-kube-api-access-slj22\") pod \"certified-operators-dj79q\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.779413 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.836259 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c777ae33-b9b8-4f1a-a718-1c979513d33b-catalog-content\") pod \"community-operators-9slhm\" (UID: \"c777ae33-b9b8-4f1a-a718-1c979513d33b\") " pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.836764 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c777ae33-b9b8-4f1a-a718-1c979513d33b-utilities\") pod \"community-operators-9slhm\" (UID: \"c777ae33-b9b8-4f1a-a718-1c979513d33b\") " pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.836833 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7s4\" (UniqueName: \"kubernetes.io/projected/c777ae33-b9b8-4f1a-a718-1c979513d33b-kube-api-access-9q7s4\") pod \"community-operators-9slhm\" (UID: \"c777ae33-b9b8-4f1a-a718-1c979513d33b\") " pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.938259 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q7s4\" (UniqueName: \"kubernetes.io/projected/c777ae33-b9b8-4f1a-a718-1c979513d33b-kube-api-access-9q7s4\") pod \"community-operators-9slhm\" (UID: \"c777ae33-b9b8-4f1a-a718-1c979513d33b\") " pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.938326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c777ae33-b9b8-4f1a-a718-1c979513d33b-catalog-content\") pod \"community-operators-9slhm\" (UID: \"c777ae33-b9b8-4f1a-a718-1c979513d33b\") " pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.938358 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c777ae33-b9b8-4f1a-a718-1c979513d33b-utilities\") pod \"community-operators-9slhm\" (UID: \"c777ae33-b9b8-4f1a-a718-1c979513d33b\") " pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.938982 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c777ae33-b9b8-4f1a-a718-1c979513d33b-utilities\") pod \"community-operators-9slhm\" (UID: \"c777ae33-b9b8-4f1a-a718-1c979513d33b\") " pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.939021 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c777ae33-b9b8-4f1a-a718-1c979513d33b-catalog-content\") pod \"community-operators-9slhm\" (UID: \"c777ae33-b9b8-4f1a-a718-1c979513d33b\") " pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.958002 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q7s4\" (UniqueName: \"kubernetes.io/projected/c777ae33-b9b8-4f1a-a718-1c979513d33b-kube-api-access-9q7s4\") pod 
\"community-operators-9slhm\" (UID: \"c777ae33-b9b8-4f1a-a718-1c979513d33b\") " pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:54 crc kubenswrapper[4823]: I0121 17:21:54.971440 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:21:55 crc kubenswrapper[4823]: I0121 17:21:55.184350 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dj79q"] Jan 21 17:21:55 crc kubenswrapper[4823]: W0121 17:21:55.192269 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf48059ea_e3f3_4b21_a108_ea45d532536a.slice/crio-9a2a03f36200696e4e5c24744485ac4042dce918287ff19e1fa500090a26e5d0 WatchSource:0}: Error finding container 9a2a03f36200696e4e5c24744485ac4042dce918287ff19e1fa500090a26e5d0: Status 404 returned error can't find the container with id 9a2a03f36200696e4e5c24744485ac4042dce918287ff19e1fa500090a26e5d0 Jan 21 17:21:55 crc kubenswrapper[4823]: I0121 17:21:55.344090 4823 generic.go:334] "Generic (PLEG): container finished" podID="07eebc10-1379-4c90-b506-ab6d548702f2" containerID="e6200f69514c26397c2b68447ae4233b4935e98b770b92369d33bc7b54d49b5e" exitCode=0 Jan 21 17:21:55 crc kubenswrapper[4823]: I0121 17:21:55.352374 4823 generic.go:334] "Generic (PLEG): container finished" podID="3235d0ea-4f5c-4b04-b519-c8c7561a41ee" containerID="838c5f17a262aa23b458e11567473260d7b7a6c1f46f77c089318db11e24fd14" exitCode=0 Jan 21 17:21:55 crc kubenswrapper[4823]: I0121 17:21:55.369453 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfcp9" event={"ID":"07eebc10-1379-4c90-b506-ab6d548702f2","Type":"ContainerDied","Data":"e6200f69514c26397c2b68447ae4233b4935e98b770b92369d33bc7b54d49b5e"} Jan 21 17:21:55 crc kubenswrapper[4823]: I0121 17:21:55.369511 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj79q" event={"ID":"f48059ea-e3f3-4b21-a108-ea45d532536a","Type":"ContainerStarted","Data":"528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c"} Jan 21 17:21:55 crc kubenswrapper[4823]: I0121 17:21:55.369537 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj79q" event={"ID":"f48059ea-e3f3-4b21-a108-ea45d532536a","Type":"ContainerStarted","Data":"9a2a03f36200696e4e5c24744485ac4042dce918287ff19e1fa500090a26e5d0"} Jan 21 17:21:55 crc kubenswrapper[4823]: I0121 17:21:55.369550 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlg7m" event={"ID":"3235d0ea-4f5c-4b04-b519-c8c7561a41ee","Type":"ContainerDied","Data":"838c5f17a262aa23b458e11567473260d7b7a6c1f46f77c089318db11e24fd14"} Jan 21 17:21:55 crc kubenswrapper[4823]: I0121 17:21:55.428491 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9slhm"] Jan 21 17:21:55 crc kubenswrapper[4823]: W0121 17:21:55.433802 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc777ae33_b9b8_4f1a_a718_1c979513d33b.slice/crio-fc984c8ba9c0fff8ec4a3ef9549b43e851ee3dc639095191a08860e53d2b050b WatchSource:0}: Error finding container fc984c8ba9c0fff8ec4a3ef9549b43e851ee3dc639095191a08860e53d2b050b: Status 404 returned error can't find the container with id fc984c8ba9c0fff8ec4a3ef9549b43e851ee3dc639095191a08860e53d2b050b Jan 
21 17:21:56 crc kubenswrapper[4823]: I0121 17:21:56.357996 4823 generic.go:334] "Generic (PLEG): container finished" podID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerID="528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c" exitCode=0 Jan 21 17:21:56 crc kubenswrapper[4823]: I0121 17:21:56.358291 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj79q" event={"ID":"f48059ea-e3f3-4b21-a108-ea45d532536a","Type":"ContainerDied","Data":"528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c"} Jan 21 17:21:56 crc kubenswrapper[4823]: I0121 17:21:56.362298 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlg7m" event={"ID":"3235d0ea-4f5c-4b04-b519-c8c7561a41ee","Type":"ContainerStarted","Data":"941271bb90208debf9b830ce89c4b40d088222f87a25d3289977ac3ded1d13f2"} Jan 21 17:21:56 crc kubenswrapper[4823]: I0121 17:21:56.364546 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfcp9" event={"ID":"07eebc10-1379-4c90-b506-ab6d548702f2","Type":"ContainerStarted","Data":"d2ccd19ff46f1b8b671b95c9b725e4b6677c5973957470ba2debc5e5b1a4dc8e"} Jan 21 17:21:56 crc kubenswrapper[4823]: I0121 17:21:56.365857 4823 generic.go:334] "Generic (PLEG): container finished" podID="c777ae33-b9b8-4f1a-a718-1c979513d33b" containerID="fc4dd1dbc889dd85b2d0dd6c4200ede7c29cfae584c9c2e14ba6f593642e11bf" exitCode=0 Jan 21 17:21:56 crc kubenswrapper[4823]: I0121 17:21:56.365921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slhm" event={"ID":"c777ae33-b9b8-4f1a-a718-1c979513d33b","Type":"ContainerDied","Data":"fc4dd1dbc889dd85b2d0dd6c4200ede7c29cfae584c9c2e14ba6f593642e11bf"} Jan 21 17:21:56 crc kubenswrapper[4823]: I0121 17:21:56.365942 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slhm" event={"ID":"c777ae33-b9b8-4f1a-a718-1c979513d33b","Type":"ContainerStarted","Data":"fc984c8ba9c0fff8ec4a3ef9549b43e851ee3dc639095191a08860e53d2b050b"} Jan 21 17:21:56 crc kubenswrapper[4823]: I0121 17:21:56.418087 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nfcp9" podStartSLOduration=1.648395361 podStartE2EDuration="4.418068046s" podCreationTimestamp="2026-01-21 17:21:52 +0000 UTC" firstStartedPulling="2026-01-21 17:21:53.332497383 +0000 UTC m=+314.258628253" lastFinishedPulling="2026-01-21 17:21:56.102170078 +0000 UTC m=+317.028300938" observedRunningTime="2026-01-21 17:21:56.415712867 +0000 UTC m=+317.341843737" watchObservedRunningTime="2026-01-21 17:21:56.418068046 +0000 UTC m=+317.344198906" Jan 21 17:21:56 crc kubenswrapper[4823]: I0121 17:21:56.441451 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nlg7m" podStartSLOduration=1.821026641 podStartE2EDuration="4.441432648s" podCreationTimestamp="2026-01-21 17:21:52 +0000 UTC" firstStartedPulling="2026-01-21 17:21:53.328265998 +0000 UTC m=+314.254396868" lastFinishedPulling="2026-01-21 17:21:55.948672025 +0000 UTC m=+316.874802875" observedRunningTime="2026-01-21 17:21:56.437252534 +0000 UTC m=+317.363383414" watchObservedRunningTime="2026-01-21 17:21:56.441432648 +0000 UTC m=+317.367563508" Jan 21 17:21:57 crc kubenswrapper[4823]: I0121 17:21:57.063644 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Jan 21 17:21:58 crc kubenswrapper[4823]: I0121 17:21:58.285800 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 17:21:58 crc kubenswrapper[4823]: I0121 17:21:58.391156 4823 generic.go:334] "Generic (PLEG): container finished" podID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerID="f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071" exitCode=0 Jan 21 17:21:58 crc kubenswrapper[4823]: I0121 17:21:58.391219 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj79q" event={"ID":"f48059ea-e3f3-4b21-a108-ea45d532536a","Type":"ContainerDied","Data":"f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071"} Jan 21 17:21:58 crc kubenswrapper[4823]: I0121 17:21:58.395032 4823 generic.go:334] "Generic (PLEG): container finished" podID="c777ae33-b9b8-4f1a-a718-1c979513d33b" containerID="ebd39287431f102bd7467aa7f4357f03079882b3abd72412d8d74e8a20ff09e1" exitCode=0 Jan 21 17:21:58 crc kubenswrapper[4823]: I0121 17:21:58.395074 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slhm" event={"ID":"c777ae33-b9b8-4f1a-a718-1c979513d33b","Type":"ContainerDied","Data":"ebd39287431f102bd7467aa7f4357f03079882b3abd72412d8d74e8a20ff09e1"} Jan 21 17:21:59 crc kubenswrapper[4823]: I0121 17:21:59.400793 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slhm" event={"ID":"c777ae33-b9b8-4f1a-a718-1c979513d33b","Type":"ContainerStarted","Data":"815c182197ca3e1f38a822799fa33f176331e1028484b0dc85550f4786933b63"} Jan 21 17:21:59 crc kubenswrapper[4823]: I0121 17:21:59.417586 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9slhm" podStartSLOduration=2.775530498 podStartE2EDuration="5.417567005s" podCreationTimestamp="2026-01-21 17:21:54 +0000 UTC" firstStartedPulling="2026-01-21 17:21:56.36766799 +0000 UTC m=+317.293798850" lastFinishedPulling="2026-01-21 17:21:59.009704497 +0000 UTC m=+319.935835357" observedRunningTime="2026-01-21 17:21:59.417232767 +0000 UTC m=+320.343363627" watchObservedRunningTime="2026-01-21 17:21:59.417567005 +0000 UTC m=+320.343697865" Jan 21 17:22:00 crc kubenswrapper[4823]: I0121 17:22:00.416038 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj79q" event={"ID":"f48059ea-e3f3-4b21-a108-ea45d532536a","Type":"ContainerStarted","Data":"196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a"} Jan 21 17:22:00 crc kubenswrapper[4823]: I0121 17:22:00.434992 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dj79q" podStartSLOduration=3.56253081 podStartE2EDuration="6.434975695s" podCreationTimestamp="2026-01-21 17:21:54 +0000 UTC" firstStartedPulling="2026-01-21 17:21:56.359362924 +0000 UTC m=+317.285493784" lastFinishedPulling="2026-01-21 17:21:59.231807809 +0000 UTC m=+320.157938669" observedRunningTime="2026-01-21 17:22:00.434199296 +0000 UTC m=+321.360330166" watchObservedRunningTime="2026-01-21 17:22:00.434975695 +0000 UTC m=+321.361106555" Jan 21 17:22:02 crc kubenswrapper[4823]: I0121 17:22:02.383879 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:22:02 crc kubenswrapper[4823]: I0121 
17:22:02.384009 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:22:02 crc kubenswrapper[4823]: I0121 17:22:02.451250 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:22:02 crc kubenswrapper[4823]: I0121 17:22:02.490290 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nfcp9" Jan 21 17:22:02 crc kubenswrapper[4823]: I0121 17:22:02.584210 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:22:02 crc kubenswrapper[4823]: I0121 17:22:02.584588 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:22:02 crc kubenswrapper[4823]: I0121 17:22:02.627000 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:22:03 crc kubenswrapper[4823]: I0121 17:22:03.476138 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nlg7m" Jan 21 17:22:04 crc kubenswrapper[4823]: I0121 17:22:04.779974 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:22:04 crc kubenswrapper[4823]: I0121 17:22:04.780473 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:22:04 crc kubenswrapper[4823]: I0121 17:22:04.834440 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:22:04 crc kubenswrapper[4823]: I0121 17:22:04.973967 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:22:04 crc kubenswrapper[4823]: I0121 17:22:04.974580 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:22:05 crc kubenswrapper[4823]: I0121 17:22:05.025158 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:22:05 crc kubenswrapper[4823]: I0121 17:22:05.599880 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9slhm" Jan 21 17:22:05 crc kubenswrapper[4823]: I0121 17:22:05.604545 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dj79q" Jan 21 17:22:15 crc kubenswrapper[4823]: I0121 17:22:15.070379 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:22:15 crc kubenswrapper[4823]: I0121 17:22:15.071839 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 
17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.039531 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf"] Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.041300 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" podUID="455231a6-2301-4370-86c6-f824ff47ddf4" containerName="controller-manager" containerID="cri-o://58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6" gracePeriod=30 Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.060279 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p"] Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.060554 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" podUID="04be5ced-7fba-46c1-a0c9-5fbbc004ffea" containerName="route-controller-manager" containerID="cri-o://c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa" gracePeriod=30 Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.467591 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.478516 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.494585 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g4mrz"] Jan 21 17:22:40 crc kubenswrapper[4823]: E0121 17:22:40.494803 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04be5ced-7fba-46c1-a0c9-5fbbc004ffea" containerName="route-controller-manager" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.494814 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="04be5ced-7fba-46c1-a0c9-5fbbc004ffea" containerName="route-controller-manager" Jan 21 17:22:40 crc kubenswrapper[4823]: E0121 17:22:40.494827 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455231a6-2301-4370-86c6-f824ff47ddf4" containerName="controller-manager" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.494832 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="455231a6-2301-4370-86c6-f824ff47ddf4" containerName="controller-manager" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.494970 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="04be5ced-7fba-46c1-a0c9-5fbbc004ffea" containerName="route-controller-manager" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.494984 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="455231a6-2301-4370-86c6-f824ff47ddf4" containerName="controller-manager" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.495302 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.532182 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g4mrz"] Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.614953 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-client-ca\") pod \"455231a6-2301-4370-86c6-f824ff47ddf4\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615026 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-config\") pod \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615047 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455231a6-2301-4370-86c6-f824ff47ddf4-serving-cert\") pod \"455231a6-2301-4370-86c6-f824ff47ddf4\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615097 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-client-ca\") pod \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615146 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcx8v\" (UniqueName: \"kubernetes.io/projected/455231a6-2301-4370-86c6-f824ff47ddf4-kube-api-access-jcx8v\") pod \"455231a6-2301-4370-86c6-f824ff47ddf4\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615173 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-config\") pod \"455231a6-2301-4370-86c6-f824ff47ddf4\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615214 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t69w6\" (UniqueName: \"kubernetes.io/projected/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-kube-api-access-t69w6\") pod \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615233 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-proxy-ca-bundles\") pod \"455231a6-2301-4370-86c6-f824ff47ddf4\" (UID: \"455231a6-2301-4370-86c6-f824ff47ddf4\") " Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615251 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-serving-cert\") pod \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\" (UID: \"04be5ced-7fba-46c1-a0c9-5fbbc004ffea\") " Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615375 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw44r\" (UniqueName: \"kubernetes.io/projected/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-kube-api-access-kw44r\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615424 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615452 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615482 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-trusted-ca\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615515 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-registry-tls\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615536 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-bound-sa-token\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615559 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-registry-certificates\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615598 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615724 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-config" (OuterVolumeSpecName: "config") pod "04be5ced-7fba-46c1-a0c9-5fbbc004ffea" (UID: "04be5ced-7fba-46c1-a0c9-5fbbc004ffea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.615845 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-client-ca" (OuterVolumeSpecName: "client-ca") pod "455231a6-2301-4370-86c6-f824ff47ddf4" (UID: "455231a6-2301-4370-86c6-f824ff47ddf4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.616074 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-config" (OuterVolumeSpecName: "config") pod "455231a6-2301-4370-86c6-f824ff47ddf4" (UID: "455231a6-2301-4370-86c6-f824ff47ddf4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.616235 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-client-ca" (OuterVolumeSpecName: "client-ca") pod "04be5ced-7fba-46c1-a0c9-5fbbc004ffea" (UID: "04be5ced-7fba-46c1-a0c9-5fbbc004ffea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.616612 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "455231a6-2301-4370-86c6-f824ff47ddf4" (UID: "455231a6-2301-4370-86c6-f824ff47ddf4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.619615 4823 generic.go:334] "Generic (PLEG): container finished" podID="04be5ced-7fba-46c1-a0c9-5fbbc004ffea" containerID="c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa" exitCode=0 Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.619682 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" event={"ID":"04be5ced-7fba-46c1-a0c9-5fbbc004ffea","Type":"ContainerDied","Data":"c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa"} Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.619714 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" event={"ID":"04be5ced-7fba-46c1-a0c9-5fbbc004ffea","Type":"ContainerDied","Data":"e0152aae9ce65cbe4cae44865a0345dc366fda7d97ae4318ca557590b134fb5c"} Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.619733 4823 scope.go:117] "RemoveContainer" containerID="c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.619814 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.620734 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455231a6-2301-4370-86c6-f824ff47ddf4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "455231a6-2301-4370-86c6-f824ff47ddf4" (UID: "455231a6-2301-4370-86c6-f824ff47ddf4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.621100 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455231a6-2301-4370-86c6-f824ff47ddf4-kube-api-access-jcx8v" (OuterVolumeSpecName: "kube-api-access-jcx8v") pod "455231a6-2301-4370-86c6-f824ff47ddf4" (UID: "455231a6-2301-4370-86c6-f824ff47ddf4"). InnerVolumeSpecName "kube-api-access-jcx8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.621714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-kube-api-access-t69w6" (OuterVolumeSpecName: "kube-api-access-t69w6") pod "04be5ced-7fba-46c1-a0c9-5fbbc004ffea" (UID: "04be5ced-7fba-46c1-a0c9-5fbbc004ffea"). InnerVolumeSpecName "kube-api-access-t69w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.622301 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04be5ced-7fba-46c1-a0c9-5fbbc004ffea" (UID: "04be5ced-7fba-46c1-a0c9-5fbbc004ffea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.623144 4823 generic.go:334] "Generic (PLEG): container finished" podID="455231a6-2301-4370-86c6-f824ff47ddf4" containerID="58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6" exitCode=0 Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.623183 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" event={"ID":"455231a6-2301-4370-86c6-f824ff47ddf4","Type":"ContainerDied","Data":"58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6"} Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.623204 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" event={"ID":"455231a6-2301-4370-86c6-f824ff47ddf4","Type":"ContainerDied","Data":"37614affae6324d02f681c698737d2be26b63448139d06c93ec1f44419ae8cc4"} Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.623324 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.642597 4823 scope.go:117] "RemoveContainer" containerID="c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa" Jan 21 17:22:40 crc kubenswrapper[4823]: E0121 17:22:40.643680 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa\": container with ID starting with c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa not found: ID does not exist" containerID="c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.643722 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa"} err="failed to get container status \"c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa\": rpc error: code = NotFound desc = could not find container \"c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa\": container with ID starting with c452ffc1544db30a2df9da9682bf6b656dc0e7a8757a0cfbd14627d2be3a1cfa not found: ID does not exist" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.643785 4823 scope.go:117] "RemoveContainer" containerID="58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.644838 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.653022 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf"] Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.657034 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-lkzbf"] Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.666013 4823 scope.go:117] "RemoveContainer" containerID="58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6" Jan 21 17:22:40 crc kubenswrapper[4823]: E0121 17:22:40.666625 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6\": container with ID starting with 58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6 not found: ID does not exist" containerID="58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.666674 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6"} err="failed to get container status \"58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6\": rpc error: code = NotFound desc = could not find container \"58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6\": container with ID starting with 58c44f34c01faa6e5fad18907bcad4ccdb72a01b77840e30c8b5b78277732ca6 not found: 
ID does not exist" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.717821 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.717925 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-trusted-ca\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.717973 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-registry-tls\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718008 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-bound-sa-token\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718038 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-registry-certificates\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718073 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718148 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw44r\" (UniqueName: \"kubernetes.io/projected/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-kube-api-access-kw44r\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718204 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcx8v\" (UniqueName: \"kubernetes.io/projected/455231a6-2301-4370-86c6-f824ff47ddf4-kube-api-access-jcx8v\") on node \"crc\" DevicePath \"\"" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718218 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718231 4823 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t69w6\" (UniqueName: \"kubernetes.io/projected/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-kube-api-access-t69w6\") on node \"crc\" DevicePath \"\"" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718256 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718269 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718282 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/455231a6-2301-4370-86c6-f824ff47ddf4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718294 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718308 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455231a6-2301-4370-86c6-f824ff47ddf4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.718319 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04be5ced-7fba-46c1-a0c9-5fbbc004ffea-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.719528 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.720512 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-registry-certificates\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.723114 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-registry-tls\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.723307 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.723724 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-trusted-ca\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.734395 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-bound-sa-token\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.738452 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw44r\" (UniqueName: \"kubernetes.io/projected/55051d12-ed8e-44e4-9d03-3699a2ae1cb5-kube-api-access-kw44r\") pod \"image-registry-66df7c8f76-g4mrz\" (UID: \"55051d12-ed8e-44e4-9d03-3699a2ae1cb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.807066 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.947236 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p"] Jan 21 17:22:40 crc kubenswrapper[4823]: I0121 17:22:40.951390 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-vfk9p"] Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.226431 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g4mrz"] Jan 21 17:22:41 crc kubenswrapper[4823]: W0121 17:22:41.236579 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55051d12_ed8e_44e4_9d03_3699a2ae1cb5.slice/crio-a1951575f1f4598ed480e64c177eaab01cbaa98b27e8ee10aa83e6a7ad059952 WatchSource:0}: Error finding container a1951575f1f4598ed480e64c177eaab01cbaa98b27e8ee10aa83e6a7ad059952: Status 404 returned error can't find the container with id a1951575f1f4598ed480e64c177eaab01cbaa98b27e8ee10aa83e6a7ad059952 Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.350662 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04be5ced-7fba-46c1-a0c9-5fbbc004ffea" path="/var/lib/kubelet/pods/04be5ced-7fba-46c1-a0c9-5fbbc004ffea/volumes" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.351488 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455231a6-2301-4370-86c6-f824ff47ddf4" path="/var/lib/kubelet/pods/455231a6-2301-4370-86c6-f824ff47ddf4/volumes" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.635272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" event={"ID":"55051d12-ed8e-44e4-9d03-3699a2ae1cb5","Type":"ContainerStarted","Data":"50a326527fb1449c8bbc5f3b3e08a502d068c00c8a286b73aaf4b4a368a14bf5"} Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.635654 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" 
event={"ID":"55051d12-ed8e-44e4-9d03-3699a2ae1cb5","Type":"ContainerStarted","Data":"a1951575f1f4598ed480e64c177eaab01cbaa98b27e8ee10aa83e6a7ad059952"} Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.635834 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.665929 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz" podStartSLOduration=1.66590715 podStartE2EDuration="1.66590715s" podCreationTimestamp="2026-01-21 17:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:22:41.655125557 +0000 UTC m=+362.581256417" watchObservedRunningTime="2026-01-21 17:22:41.66590715 +0000 UTC m=+362.592038010" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.670015 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr"] Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.671157 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.672315 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.673626 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.674079 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.674181 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.674281 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.675583 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns"] Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.676772 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.679153 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.679564 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.680074 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.680271 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.683457 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr"] Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.685033 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.685469 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.687639 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.688390 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns"] Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.691530 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.834207 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkr7c\" (UniqueName: \"kubernetes.io/projected/a10e7e74-df81-414c-9588-99ffa993d657-kube-api-access-pkr7c\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.834301 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6338a6b3-0e94-4e3a-90f1-0416a9abceff-client-ca\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.834336 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6338a6b3-0e94-4e3a-90f1-0416a9abceff-config\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.834373 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a10e7e74-df81-414c-9588-99ffa993d657-config\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.834409 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6338a6b3-0e94-4e3a-90f1-0416a9abceff-proxy-ca-bundles\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.834475 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6338a6b3-0e94-4e3a-90f1-0416a9abceff-serving-cert\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.834517 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfpm\" (UniqueName: \"kubernetes.io/projected/6338a6b3-0e94-4e3a-90f1-0416a9abceff-kube-api-access-7qfpm\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.834591 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10e7e74-df81-414c-9588-99ffa993d657-client-ca\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.834626 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10e7e74-df81-414c-9588-99ffa993d657-serving-cert\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.935601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6338a6b3-0e94-4e3a-90f1-0416a9abceff-config\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.935690 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10e7e74-df81-414c-9588-99ffa993d657-config\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.935755 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6338a6b3-0e94-4e3a-90f1-0416a9abceff-proxy-ca-bundles\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.935784 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6338a6b3-0e94-4e3a-90f1-0416a9abceff-serving-cert\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.935843 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qfpm\" (UniqueName: \"kubernetes.io/projected/6338a6b3-0e94-4e3a-90f1-0416a9abceff-kube-api-access-7qfpm\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.935977 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10e7e74-df81-414c-9588-99ffa993d657-client-ca\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.936006 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10e7e74-df81-414c-9588-99ffa993d657-serving-cert\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.936284 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkr7c\" (UniqueName: \"kubernetes.io/projected/a10e7e74-df81-414c-9588-99ffa993d657-kube-api-access-pkr7c\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.936314 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6338a6b3-0e94-4e3a-90f1-0416a9abceff-client-ca\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.937830 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6338a6b3-0e94-4e3a-90f1-0416a9abceff-config\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.938564 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6338a6b3-0e94-4e3a-90f1-0416a9abceff-client-ca\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: 
\"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.940120 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10e7e74-df81-414c-9588-99ffa993d657-config\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.940740 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6338a6b3-0e94-4e3a-90f1-0416a9abceff-proxy-ca-bundles\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.941375 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10e7e74-df81-414c-9588-99ffa993d657-client-ca\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.949002 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10e7e74-df81-414c-9588-99ffa993d657-serving-cert\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.949099 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6338a6b3-0e94-4e3a-90f1-0416a9abceff-serving-cert\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.958056 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkr7c\" (UniqueName: \"kubernetes.io/projected/a10e7e74-df81-414c-9588-99ffa993d657-kube-api-access-pkr7c\") pod \"route-controller-manager-6d744fcb79-dqxns\" (UID: \"a10e7e74-df81-414c-9588-99ffa993d657\") " pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:41 crc kubenswrapper[4823]: I0121 17:22:41.967710 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qfpm\" (UniqueName: \"kubernetes.io/projected/6338a6b3-0e94-4e3a-90f1-0416a9abceff-kube-api-access-7qfpm\") pod \"controller-manager-5f58cc7dd9-wpkbr\" (UID: \"6338a6b3-0e94-4e3a-90f1-0416a9abceff\") " pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.003958 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.019153 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.225987 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr"] Jan 21 17:22:42 crc kubenswrapper[4823]: W0121 17:22:42.238882 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6338a6b3_0e94_4e3a_90f1_0416a9abceff.slice/crio-1999d104f0252994cf8e71dd0775db5671383ba108fa06a0f0aa6b605d948761 WatchSource:0}: Error finding container 1999d104f0252994cf8e71dd0775db5671383ba108fa06a0f0aa6b605d948761: Status 404 returned error can't find the container with id 1999d104f0252994cf8e71dd0775db5671383ba108fa06a0f0aa6b605d948761 Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.246184 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns"] Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.640111 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" event={"ID":"a10e7e74-df81-414c-9588-99ffa993d657","Type":"ContainerStarted","Data":"8435c3d1eee1be5b3956aafd16f9e949521e064d79116012353e56d0a24dba23"} Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.640426 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" event={"ID":"a10e7e74-df81-414c-9588-99ffa993d657","Type":"ContainerStarted","Data":"7d5bf1ac33b4070912e334434f9bd77e13a907aabd3ae7e3f1b7ef2dd5a21a94"} Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.640968 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.642539 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" event={"ID":"6338a6b3-0e94-4e3a-90f1-0416a9abceff","Type":"ContainerStarted","Data":"b6670ddd060b30f487128377eb465b3ce3b45d6b321c0f8eabb40487ae11869e"} Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.642561 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" event={"ID":"6338a6b3-0e94-4e3a-90f1-0416a9abceff","Type":"ContainerStarted","Data":"1999d104f0252994cf8e71dd0775db5671383ba108fa06a0f0aa6b605d948761"} Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.642807 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.647393 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.660531 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns" podStartSLOduration=2.660514734 podStartE2EDuration="2.660514734s" podCreationTimestamp="2026-01-21 17:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:22:42.659413896 +0000 
UTC m=+363.585544756" watchObservedRunningTime="2026-01-21 17:22:42.660514734 +0000 UTC m=+363.586645594"
Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.679341 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f58cc7dd9-wpkbr" podStartSLOduration=2.67932299 podStartE2EDuration="2.67932299s" podCreationTimestamp="2026-01-21 17:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:22:42.675308748 +0000 UTC m=+363.601439608" watchObservedRunningTime="2026-01-21 17:22:42.67932299 +0000 UTC m=+363.605453850"
Jan 21 17:22:42 crc kubenswrapper[4823]: I0121 17:22:42.989287 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d744fcb79-dqxns"
Jan 21 17:22:45 crc kubenswrapper[4823]: I0121 17:22:45.071063 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:22:45 crc kubenswrapper[4823]: I0121 17:22:45.071661 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:23:00 crc kubenswrapper[4823]: I0121 17:23:00.820350 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-g4mrz"
Jan 21 17:23:00 crc kubenswrapper[4823]: I0121 17:23:00.878619 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nrwt6"]
Jan 21 17:23:15 crc kubenswrapper[4823]: I0121 17:23:15.070242 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:23:15 crc kubenswrapper[4823]: I0121 17:23:15.070924 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:23:15 crc kubenswrapper[4823]: I0121 17:23:15.070981 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw"
Jan 21 17:23:15 crc kubenswrapper[4823]: I0121 17:23:15.071741 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b8c1a02b532d1e17f35eddd81a5df62e89e85b8c0ba2bf0662dc871e80c0ac4"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
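[Editor's note: the recurring liveness failures above are plain HTTP GETs against 127.0.0.1:8798/health; "connect: connection refused" means nothing was listening on that port when the kubelet dialed, and after enough consecutive misses the container is killed and restarted. A rough standard-library sketch of the mechanism, as an illustration rather than the kubelet's actual prober code:]

```go
// Illustrative sketch of an HTTP liveness-style check like the one failing
// in this log. HTTP probes count 2xx/3xx as healthy; a refused connection
// surfaces as a dial error, exactly as logged by patch_prober.go above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// The kubelet runs checks like this on the probe's period (entries here
	// arrive 30s apart) and only restarts after the failure threshold.
	for i := 0; i < 3; i++ {
		if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
			fmt.Println("Probe failed:", err)
		}
		time.Sleep(30 * time.Second)
	}
}
```

Jan 21 17:23:15 crc kubenswrapper[4823]: I0121 17:23:15.071845 4823 kuberuntime_container.go:808] "Killing container 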
with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://5b8c1a02b532d1e17f35eddd81a5df62e89e85b8c0ba2bf0662dc871e80c0ac4" gracePeriod=600 Jan 21 17:23:15 crc kubenswrapper[4823]: I0121 17:23:15.830580 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="5b8c1a02b532d1e17f35eddd81a5df62e89e85b8c0ba2bf0662dc871e80c0ac4" exitCode=0 Jan 21 17:23:15 crc kubenswrapper[4823]: I0121 17:23:15.830679 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"5b8c1a02b532d1e17f35eddd81a5df62e89e85b8c0ba2bf0662dc871e80c0ac4"} Jan 21 17:23:15 crc kubenswrapper[4823]: I0121 17:23:15.830951 4823 scope.go:117] "RemoveContainer" containerID="f2fa63b29075cce52d826a0452d415f7f53cefc1d76b6785efb24186669cdd04" Jan 21 17:23:16 crc kubenswrapper[4823]: I0121 17:23:16.840721 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"a5e3a96197b34c3415baab0fe948edc38ebee5874db835b6efcf008c938778e5"} Jan 21 17:23:25 crc kubenswrapper[4823]: I0121 17:23:25.918978 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" podUID="49ca3409-f1c5-41e7-aabe-3382b23fd48c" containerName="registry" containerID="cri-o://a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39" gracePeriod=30 Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.309688 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.364267 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.364617 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj8tf\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-kube-api-access-fj8tf\") pod \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.364644 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-trusted-ca\") pod \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.364699 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-tls\") pod \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.364731 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49ca3409-f1c5-41e7-aabe-3382b23fd48c-installation-pull-secrets\") pod \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.364762 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-bound-sa-token\") pod \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.364793 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49ca3409-f1c5-41e7-aabe-3382b23fd48c-ca-trust-extracted\") pod \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.364814 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-certificates\") pod \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\" (UID: \"49ca3409-f1c5-41e7-aabe-3382b23fd48c\") " Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.366180 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "49ca3409-f1c5-41e7-aabe-3382b23fd48c" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.370200 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "49ca3409-f1c5-41e7-aabe-3382b23fd48c" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.373184 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ca3409-f1c5-41e7-aabe-3382b23fd48c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "49ca3409-f1c5-41e7-aabe-3382b23fd48c" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.373187 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "49ca3409-f1c5-41e7-aabe-3382b23fd48c" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.374952 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "49ca3409-f1c5-41e7-aabe-3382b23fd48c" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.376602 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-kube-api-access-fj8tf" (OuterVolumeSpecName: "kube-api-access-fj8tf") pod "49ca3409-f1c5-41e7-aabe-3382b23fd48c" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c"). InnerVolumeSpecName "kube-api-access-fj8tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.376732 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "49ca3409-f1c5-41e7-aabe-3382b23fd48c" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.386375 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ca3409-f1c5-41e7-aabe-3382b23fd48c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "49ca3409-f1c5-41e7-aabe-3382b23fd48c" (UID: "49ca3409-f1c5-41e7-aabe-3382b23fd48c"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.465710 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.465747 4823 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.465760 4823 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49ca3409-f1c5-41e7-aabe-3382b23fd48c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.465769 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.465779 4823 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49ca3409-f1c5-41e7-aabe-3382b23fd48c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.465787 4823 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49ca3409-f1c5-41e7-aabe-3382b23fd48c-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.465796 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj8tf\" (UniqueName: \"kubernetes.io/projected/49ca3409-f1c5-41e7-aabe-3382b23fd48c-kube-api-access-fj8tf\") on node \"crc\" DevicePath \"\"" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.901306 4823 generic.go:334] "Generic (PLEG): container finished" podID="49ca3409-f1c5-41e7-aabe-3382b23fd48c" containerID="a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39" exitCode=0 Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.901358 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" event={"ID":"49ca3409-f1c5-41e7-aabe-3382b23fd48c","Type":"ContainerDied","Data":"a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39"} Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.901384 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" event={"ID":"49ca3409-f1c5-41e7-aabe-3382b23fd48c","Type":"ContainerDied","Data":"20eee32bf2304c9f40fec3b7f73f38874598d01e1cf8f4acd773c2d981a493b7"} Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.901401 4823 scope.go:117] "RemoveContainer" containerID="a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.901444 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nrwt6" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.922965 4823 scope.go:117] "RemoveContainer" containerID="a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39" Jan 21 17:23:26 crc kubenswrapper[4823]: E0121 17:23:26.923567 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39\": container with ID starting with a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39 not found: ID does not exist" containerID="a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.923600 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39"} err="failed to get container status \"a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39\": rpc error: code = NotFound desc = could not find container \"a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39\": container with ID starting with a93e504b34514e911dcca923e4f9efeb135a5b85dab223eff599cab7c37bdc39 not found: ID does not exist" Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.947759 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nrwt6"] Jan 21 17:23:26 crc kubenswrapper[4823]: I0121 17:23:26.952765 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nrwt6"] Jan 21 17:23:27 crc kubenswrapper[4823]: I0121 17:23:27.350390 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ca3409-f1c5-41e7-aabe-3382b23fd48c" path="/var/lib/kubelet/pods/49ca3409-f1c5-41e7-aabe-3382b23fd48c/volumes" Jan 21 17:25:45 crc kubenswrapper[4823]: I0121 17:25:45.070234 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:25:45 crc kubenswrapper[4823]: I0121 17:25:45.070939 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:26:15 crc kubenswrapper[4823]: I0121 17:26:15.071079 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:26:15 crc kubenswrapper[4823]: I0121 17:26:15.071791 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:26:45 crc kubenswrapper[4823]: I0121 17:26:45.070970 4823 patch_prober.go:28] interesting 
pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:26:45 crc kubenswrapper[4823]: I0121 17:26:45.071718 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:26:45 crc kubenswrapper[4823]: I0121 17:26:45.071813 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw"
Jan 21 17:26:45 crc kubenswrapper[4823]: I0121 17:26:45.072894 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5e3a96197b34c3415baab0fe948edc38ebee5874db835b6efcf008c938778e5"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 17:26:45 crc kubenswrapper[4823]: I0121 17:26:45.073014 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://a5e3a96197b34c3415baab0fe948edc38ebee5874db835b6efcf008c938778e5" gracePeriod=600
Jan 21 17:26:46 crc kubenswrapper[4823]: I0121 17:26:46.087439 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="a5e3a96197b34c3415baab0fe948edc38ebee5874db835b6efcf008c938778e5" exitCode=0
Jan 21 17:26:46 crc kubenswrapper[4823]: I0121 17:26:46.087510 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"a5e3a96197b34c3415baab0fe948edc38ebee5874db835b6efcf008c938778e5"}
Jan 21 17:26:46 crc kubenswrapper[4823]: I0121 17:26:46.088895 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"01551426f5ae121d56576d8ebbe31d7f30aa5b3ef8f744c35e0d4bccddcab2b3"}
Jan 21 17:26:46 crc kubenswrapper[4823]: I0121 17:26:46.088937 4823 scope.go:117] "RemoveContainer" containerID="5b8c1a02b532d1e17f35eddd81a5df62e89e85b8c0ba2bf0662dc871e80c0ac4"
Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.873603 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt"]
Jan 21 17:27:04 crc kubenswrapper[4823]: E0121 17:27:04.874380 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ca3409-f1c5-41e7-aabe-3382b23fd48c" containerName="registry"
Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.874393 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ca3409-f1c5-41e7-aabe-3382b23fd48c" containerName="registry"
Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.874521 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ca3409-f1c5-41e7-aabe-3382b23fd48c" containerName="registry"
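[Editor's note: this is the second liveness-driven restart of machine-config-daemon in this log (container 5b8c1a02... killed at 17:23:15, its replacement a5e3a961... killed at 17:26:45), followed by routine stale-state cleanup for the long-deleted registry container. When triaging a journal like this, it can help to filter for the probe-failure and kill events; a hypothetical standard-library helper, where the input filename and match patterns are assumptions rather than an existing tool:]

```go
// Hypothetical triage helper: scan an exported journal and print only the
// probe-failure / restart-decision / kill entries for quick correlation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var interesting = regexp.MustCompile(
	`"(Probe failed|Message for Container of pod|Killing container with a grace period)"`)

func main() {
	f, err := os.Open("kubelet.log") // assumed path to a saved journal
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if interesting.MatchString(sc.Text()) {
			fmt.Println(sc.Text())
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}
```

Jan 21 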
17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.874969 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt" Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.878460 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.878644 4823 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4rvsh" Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.879713 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.884729 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-rqrjt"] Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.885537 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rqrjt" Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.894255 4823 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-gn5d2" Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.900078 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt"] Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.902226 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rqrjt"] Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.923742 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vlzz\" (UniqueName: \"kubernetes.io/projected/594754ac-1219-4432-acb9-6f3bb802b248-kube-api-access-9vlzz\") pod \"cert-manager-cainjector-cf98fcc89-b7ddt\" (UID: \"594754ac-1219-4432-acb9-6f3bb802b248\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt" Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.934524 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-r49b9"] Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.935665 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9" Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.938663 4823 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7tfnt" Jan 21 17:27:04 crc kubenswrapper[4823]: I0121 17:27:04.938876 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-r49b9"] Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.024600 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w8lp\" (UniqueName: \"kubernetes.io/projected/dffc6c84-eb72-4348-a370-ade90b81ea5c-kube-api-access-6w8lp\") pod \"cert-manager-webhook-687f57d79b-r49b9\" (UID: \"dffc6c84-eb72-4348-a370-ade90b81ea5c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.024955 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vlzz\" (UniqueName: \"kubernetes.io/projected/594754ac-1219-4432-acb9-6f3bb802b248-kube-api-access-9vlzz\") pod \"cert-manager-cainjector-cf98fcc89-b7ddt\" (UID: \"594754ac-1219-4432-acb9-6f3bb802b248\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.025054 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xmt5\" (UniqueName: \"kubernetes.io/projected/419c5420-cb1c-474e-b276-67c540b41ec0-kube-api-access-8xmt5\") pod \"cert-manager-858654f9db-rqrjt\" (UID: \"419c5420-cb1c-474e-b276-67c540b41ec0\") " pod="cert-manager/cert-manager-858654f9db-rqrjt" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.044019 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vlzz\" (UniqueName: \"kubernetes.io/projected/594754ac-1219-4432-acb9-6f3bb802b248-kube-api-access-9vlzz\") pod \"cert-manager-cainjector-cf98fcc89-b7ddt\" (UID: \"594754ac-1219-4432-acb9-6f3bb802b248\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.126005 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w8lp\" (UniqueName: \"kubernetes.io/projected/dffc6c84-eb72-4348-a370-ade90b81ea5c-kube-api-access-6w8lp\") pod \"cert-manager-webhook-687f57d79b-r49b9\" (UID: \"dffc6c84-eb72-4348-a370-ade90b81ea5c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.126064 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xmt5\" (UniqueName: \"kubernetes.io/projected/419c5420-cb1c-474e-b276-67c540b41ec0-kube-api-access-8xmt5\") pod \"cert-manager-858654f9db-rqrjt\" (UID: \"419c5420-cb1c-474e-b276-67c540b41ec0\") " pod="cert-manager/cert-manager-858654f9db-rqrjt" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.143557 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xmt5\" (UniqueName: \"kubernetes.io/projected/419c5420-cb1c-474e-b276-67c540b41ec0-kube-api-access-8xmt5\") pod \"cert-manager-858654f9db-rqrjt\" (UID: \"419c5420-cb1c-474e-b276-67c540b41ec0\") " pod="cert-manager/cert-manager-858654f9db-rqrjt" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.144545 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w8lp\" 
(UniqueName: \"kubernetes.io/projected/dffc6c84-eb72-4348-a370-ade90b81ea5c-kube-api-access-6w8lp\") pod \"cert-manager-webhook-687f57d79b-r49b9\" (UID: \"dffc6c84-eb72-4348-a370-ade90b81ea5c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.207620 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.227631 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rqrjt" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.253268 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9" Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.450037 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt"] Jan 21 17:27:05 crc kubenswrapper[4823]: W0121 17:27:05.464051 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod594754ac_1219_4432_acb9_6f3bb802b248.slice/crio-2ac498a443ef3ddcaf4864bcf815341f2b511f6544531193593cdc9e459ae08b WatchSource:0}: Error finding container 2ac498a443ef3ddcaf4864bcf815341f2b511f6544531193593cdc9e459ae08b: Status 404 returned error can't find the container with id 2ac498a443ef3ddcaf4864bcf815341f2b511f6544531193593cdc9e459ae08b Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.466578 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:27:05 crc kubenswrapper[4823]: W0121 17:27:05.487803 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod419c5420_cb1c_474e_b276_67c540b41ec0.slice/crio-f729dc50d8be9795e369d431035d7a0429975a20d8af172f9476e1bbab73bd50 WatchSource:0}: Error finding container f729dc50d8be9795e369d431035d7a0429975a20d8af172f9476e1bbab73bd50: Status 404 returned error can't find the container with id f729dc50d8be9795e369d431035d7a0429975a20d8af172f9476e1bbab73bd50 Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.491574 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rqrjt"] Jan 21 17:27:05 crc kubenswrapper[4823]: I0121 17:27:05.523468 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-r49b9"] Jan 21 17:27:05 crc kubenswrapper[4823]: W0121 17:27:05.535177 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddffc6c84_eb72_4348_a370_ade90b81ea5c.slice/crio-22dbf672bf5225d81252796921d6ef67e843ce5d39406ac175394630b382690a WatchSource:0}: Error finding container 22dbf672bf5225d81252796921d6ef67e843ce5d39406ac175394630b382690a: Status 404 returned error can't find the container with id 22dbf672bf5225d81252796921d6ef67e843ce5d39406ac175394630b382690a Jan 21 17:27:06 crc kubenswrapper[4823]: I0121 17:27:06.210079 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt" event={"ID":"594754ac-1219-4432-acb9-6f3bb802b248","Type":"ContainerStarted","Data":"2ac498a443ef3ddcaf4864bcf815341f2b511f6544531193593cdc9e459ae08b"} Jan 21 17:27:06 crc kubenswrapper[4823]: I0121 17:27:06.215663 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rqrjt" event={"ID":"419c5420-cb1c-474e-b276-67c540b41ec0","Type":"ContainerStarted","Data":"f729dc50d8be9795e369d431035d7a0429975a20d8af172f9476e1bbab73bd50"} Jan 21 17:27:06 crc kubenswrapper[4823]: I0121 17:27:06.217404 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9" event={"ID":"dffc6c84-eb72-4348-a370-ade90b81ea5c","Type":"ContainerStarted","Data":"22dbf672bf5225d81252796921d6ef67e843ce5d39406ac175394630b382690a"} Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.272724 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9" event={"ID":"dffc6c84-eb72-4348-a370-ade90b81ea5c","Type":"ContainerStarted","Data":"1dcf08eff24d61a5f622f4f894e6fc1f175ecfa9d6dcb440a2e507ff8add2889"} Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.274220 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.274287 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rqrjt" event={"ID":"419c5420-cb1c-474e-b276-67c540b41ec0","Type":"ContainerStarted","Data":"21353c7cef661b72cda8772084d11754b90ee6ff5205fb238b5553b2640f3e5d"} Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.276394 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt" event={"ID":"594754ac-1219-4432-acb9-6f3bb802b248","Type":"ContainerStarted","Data":"2aa9c787540efc0dd8d9fb89bcd7effa4f61abdc1969da88acf87498b7349f39"} Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.289368 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9" podStartSLOduration=2.315291006 podStartE2EDuration="10.289335208s" podCreationTimestamp="2026-01-21 17:27:04 +0000 UTC" firstStartedPulling="2026-01-21 17:27:05.536626356 +0000 UTC m=+626.462757216" lastFinishedPulling="2026-01-21 17:27:13.510670558 +0000 UTC m=+634.436801418" observedRunningTime="2026-01-21 17:27:14.287663556 +0000 UTC m=+635.213794466" watchObservedRunningTime="2026-01-21 17:27:14.289335208 +0000 UTC m=+635.215466108" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.311199 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-rqrjt" podStartSLOduration=2.360276838 podStartE2EDuration="10.311168642s" podCreationTimestamp="2026-01-21 17:27:04 +0000 UTC" firstStartedPulling="2026-01-21 17:27:05.490194318 +0000 UTC m=+626.416325188" lastFinishedPulling="2026-01-21 17:27:13.441086102 +0000 UTC m=+634.367216992" observedRunningTime="2026-01-21 17:27:14.306566376 +0000 UTC m=+635.232697256" watchObservedRunningTime="2026-01-21 17:27:14.311168642 +0000 UTC m=+635.237299542" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.324763 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b7ddt" podStartSLOduration=2.351522186 podStartE2EDuration="10.324733427s" podCreationTimestamp="2026-01-21 17:27:04 +0000 UTC" firstStartedPulling="2026-01-21 17:27:05.466297541 +0000 UTC m=+626.392428401" lastFinishedPulling="2026-01-21 17:27:13.439508772 +0000 UTC m=+634.365639642" observedRunningTime="2026-01-21 17:27:14.324005388 +0000 UTC 
m=+635.250136258" watchObservedRunningTime="2026-01-21 17:27:14.324733427 +0000 UTC m=+635.250864337" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.485880 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7q2df"] Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.486244 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovn-controller" containerID="cri-o://603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b" gracePeriod=30 Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.486361 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kube-rbac-proxy-node" containerID="cri-o://66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7" gracePeriod=30 Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.486405 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovn-acl-logging" containerID="cri-o://dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3" gracePeriod=30 Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.486508 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996" gracePeriod=30 Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.486619 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="nbdb" containerID="cri-o://6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef" gracePeriod=30 Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.486516 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="northd" containerID="cri-o://78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07" gracePeriod=30 Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.486520 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="sbdb" containerID="cri-o://68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe" gracePeriod=30 Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.510776 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" containerID="cri-o://127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd" gracePeriod=30 Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.786101 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/3.log" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.788247 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovn-acl-logging/0.log" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.788778 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovn-controller/0.log" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.789280 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839353 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j9hgl"] Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839585 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839600 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839610 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kube-rbac-proxy-node" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839618 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kube-rbac-proxy-node" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839628 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839635 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839645 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kubecfg-setup" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839654 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kubecfg-setup" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839666 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="nbdb" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839673 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="nbdb" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839685 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovn-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839692 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovn-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839705 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="sbdb" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839712 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="sbdb" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839726 4823 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="northd" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839734 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="northd" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839744 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839751 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839760 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839768 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839776 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839784 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.839795 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovn-acl-logging" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839802 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovn-acl-logging" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839945 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839959 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kube-rbac-proxy-node" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839969 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="northd" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839977 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovn-acl-logging" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.839987 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.840000 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.840009 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="sbdb" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.840018 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 
17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.840029 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovn-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.840036 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="nbdb" Jan 21 17:27:14 crc kubenswrapper[4823]: E0121 17:27:14.840160 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.840171 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.840289 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.840508 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerName="ovnkube-controller" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.842008 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851648 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-script-lib\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851698 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-systemd\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851728 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-log-socket\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851755 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-netd\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851777 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851803 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-openvswitch\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: 
\"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851817 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-log-socket" (OuterVolumeSpecName: "log-socket") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851830 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-bin\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851866 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851867 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-kubelet\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851894 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851895 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-env-overrides\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851910 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851924 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-ovn\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851943 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-var-lib-openvswitch\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851947 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851978 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-ovn-kubernetes\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.851997 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852000 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2fdd\" (UniqueName: \"kubernetes.io/projected/b5f1d66f-b00f-4e75-8130-43977e13eec8-kube-api-access-t2fdd\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852061 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovn-node-metrics-cert\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852093 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-netns\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852137 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-node-log\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852171 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-systemd-units\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852189 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-slash\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852234 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-config\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852269 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-etc-openvswitch\") pod \"b5f1d66f-b00f-4e75-8130-43977e13eec8\" (UID: \"b5f1d66f-b00f-4e75-8130-43977e13eec8\") " Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852570 4823 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852587 4823 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852236 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852256 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852274 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852292 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-node-log" (OuterVolumeSpecName: "node-log") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852306 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852366 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852425 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-slash" (OuterVolumeSpecName: "host-slash") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852453 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852554 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852593 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.852931 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.854192 4823 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.854220 4823 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.854233 4823 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.854269 4823 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.857169 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.857474 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f1d66f-b00f-4e75-8130-43977e13eec8-kube-api-access-t2fdd" (OuterVolumeSpecName: "kube-api-access-t2fdd") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "kube-api-access-t2fdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.864590 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b5f1d66f-b00f-4e75-8130-43977e13eec8" (UID: "b5f1d66f-b00f-4e75-8130-43977e13eec8"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955086 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-run-ovn\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955139 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f0be6266-4117-47f2-9a1d-a973bab3dee8-ovnkube-script-lib\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955176 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-slash\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955205 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhlgn\" (UniqueName: \"kubernetes.io/projected/f0be6266-4117-47f2-9a1d-a973bab3dee8-kube-api-access-vhlgn\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955255 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-log-socket\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955273 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-cni-netd\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955288 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f0be6266-4117-47f2-9a1d-a973bab3dee8-ovnkube-config\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955384 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-cni-bin\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955436 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-run-systemd\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955463 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-kubelet\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955515 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-run-netns\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955540 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f0be6266-4117-47f2-9a1d-a973bab3dee8-env-overrides\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955557 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955605 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-node-log\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-etc-openvswitch\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955639 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955660 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-systemd-units\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-var-lib-openvswitch\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955700 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-run-openvswitch\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955748 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f0be6266-4117-47f2-9a1d-a973bab3dee8-ovn-node-metrics-cert\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955794 4823 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955804 4823 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955813 4823 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955822 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2fdd\" (UniqueName: \"kubernetes.io/projected/b5f1d66f-b00f-4e75-8130-43977e13eec8-kube-api-access-t2fdd\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955830 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955839 4823 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955867 4823 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955877 4823 
reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955886 4823 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955893 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955902 4823 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955910 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b5f1d66f-b00f-4e75-8130-43977e13eec8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955918 4823 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:14 crc kubenswrapper[4823]: I0121 17:27:14.955927 4823 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5f1d66f-b00f-4e75-8130-43977e13eec8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057022 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-kubelet\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057100 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-run-netns\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057124 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f0be6266-4117-47f2-9a1d-a973bab3dee8-env-overrides\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057153 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057188 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-node-log\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057208 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-etc-openvswitch\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057230 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057257 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-systemd-units\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057281 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-var-lib-openvswitch\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057309 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-run-openvswitch\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057346 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f0be6266-4117-47f2-9a1d-a973bab3dee8-ovn-node-metrics-cert\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057370 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-run-ovn\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057390 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f0be6266-4117-47f2-9a1d-a973bab3dee8-ovnkube-script-lib\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057411 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-slash\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057438 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhlgn\" (UniqueName: \"kubernetes.io/projected/f0be6266-4117-47f2-9a1d-a973bab3dee8-kube-api-access-vhlgn\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057468 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-log-socket\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057491 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-cni-netd\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057515 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f0be6266-4117-47f2-9a1d-a973bab3dee8-ovnkube-config\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057536 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-cni-bin\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057565 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-run-systemd\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057636 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-var-lib-openvswitch\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057702 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-kubelet\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.057726 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-run-netns\") pod \"ovnkube-node-j9hgl\" (UID: 
\"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058003 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-slash\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058037 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-cni-netd\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058062 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-etc-openvswitch\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058093 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-run-openvswitch\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058119 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058164 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-node-log\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058202 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-run-ovn\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058362 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-run-systemd\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058401 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: 
I0121 17:27:15.058444 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-systemd-units\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058454 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-log-socket\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058493 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0be6266-4117-47f2-9a1d-a973bab3dee8-host-cni-bin\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058368 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f0be6266-4117-47f2-9a1d-a973bab3dee8-env-overrides\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058815 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f0be6266-4117-47f2-9a1d-a973bab3dee8-ovnkube-script-lib\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.058999 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f0be6266-4117-47f2-9a1d-a973bab3dee8-ovnkube-config\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.062004 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f0be6266-4117-47f2-9a1d-a973bab3dee8-ovn-node-metrics-cert\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.074224 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhlgn\" (UniqueName: \"kubernetes.io/projected/f0be6266-4117-47f2-9a1d-a973bab3dee8-kube-api-access-vhlgn\") pod \"ovnkube-node-j9hgl\" (UID: \"f0be6266-4117-47f2-9a1d-a973bab3dee8\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.155368 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" Jan 21 17:27:15 crc kubenswrapper[4823]: W0121 17:27:15.175462 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0be6266_4117_47f2_9a1d_a973bab3dee8.slice/crio-e6d3195660829da2cb0e0e2bd7e20f3e72b589eb732dfd1bc2ac889881f398b1 WatchSource:0}: Error finding container e6d3195660829da2cb0e0e2bd7e20f3e72b589eb732dfd1bc2ac889881f398b1: Status 404 returned error can't find the container with id e6d3195660829da2cb0e0e2bd7e20f3e72b589eb732dfd1bc2ac889881f398b1 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.283390 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovnkube-controller/3.log" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.287923 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovn-acl-logging/0.log" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.288396 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7q2df_b5f1d66f-b00f-4e75-8130-43977e13eec8/ovn-controller/0.log" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.288931 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd" exitCode=0 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.288952 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe" exitCode=0 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.288961 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef" exitCode=0 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.288970 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07" exitCode=0 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.288977 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996" exitCode=0 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.288985 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7" exitCode=0 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.288991 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3" exitCode=143 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.288998 4823 generic.go:334] "Generic (PLEG): container finished" podID="b5f1d66f-b00f-4e75-8130-43977e13eec8" containerID="603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b" exitCode=143 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289043 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289040 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289092 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289106 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289118 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289130 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289142 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289153 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289168 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289176 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289183 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289189 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289195 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289202 4823 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289208 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289215 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289223 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289158 4823 scope.go:117] "RemoveContainer" containerID="127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289236 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289323 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289332 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289339 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289346 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289355 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289362 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289369 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289376 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289383 4823 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289392 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289404 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289412 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289419 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289425 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289432 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289439 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289446 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289453 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289460 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289467 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289475 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7q2df" event={"ID":"b5f1d66f-b00f-4e75-8130-43977e13eec8","Type":"ContainerDied","Data":"41e5b277bfbf6f484e84e0d754acadb7975d8cf7c4559fbf042f2550b7f9179d"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289485 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289493 4823 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289500 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289507 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289513 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289520 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289527 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289534 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289542 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.289549 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.292118 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"08530771e0ec031c0c17309e92d718786ec9534f8d9482dbf1bd34aa21509291"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.292154 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"e6d3195660829da2cb0e0e2bd7e20f3e72b589eb732dfd1bc2ac889881f398b1"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.296494 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/2.log" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.298114 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/1.log" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.298154 4823 generic.go:334] "Generic (PLEG): container finished" podID="ea8699bd-e53a-443e-b2e5-0fe577f2c19f" containerID="3d8c899ad3979e18f7a6f8a0287b50cebebda6042365ff774c1cd367e9563469" exitCode=2 Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 
17:27:15.298246 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-skvzm" event={"ID":"ea8699bd-e53a-443e-b2e5-0fe577f2c19f","Type":"ContainerDied","Data":"3d8c899ad3979e18f7a6f8a0287b50cebebda6042365ff774c1cd367e9563469"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.298325 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a"} Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.298951 4823 scope.go:117] "RemoveContainer" containerID="3d8c899ad3979e18f7a6f8a0287b50cebebda6042365ff774c1cd367e9563469" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.299156 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-skvzm_openshift-multus(ea8699bd-e53a-443e-b2e5-0fe577f2c19f)\"" pod="openshift-multus/multus-skvzm" podUID="ea8699bd-e53a-443e-b2e5-0fe577f2c19f" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.354498 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.396152 4823 scope.go:117] "RemoveContainer" containerID="68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.397348 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7q2df"] Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.401867 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7q2df"] Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.415376 4823 scope.go:117] "RemoveContainer" containerID="6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.428583 4823 scope.go:117] "RemoveContainer" containerID="78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.446520 4823 scope.go:117] "RemoveContainer" containerID="2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.463522 4823 scope.go:117] "RemoveContainer" containerID="66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.477012 4823 scope.go:117] "RemoveContainer" containerID="dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.491784 4823 scope.go:117] "RemoveContainer" containerID="603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.514290 4823 scope.go:117] "RemoveContainer" containerID="34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.528686 4823 scope.go:117] "RemoveContainer" containerID="127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.529040 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": container with ID starting with 
127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd not found: ID does not exist" containerID="127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.529069 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} err="failed to get container status \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": rpc error: code = NotFound desc = could not find container \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": container with ID starting with 127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.529090 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.529259 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\": container with ID starting with 896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151 not found: ID does not exist" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.529290 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} err="failed to get container status \"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\": rpc error: code = NotFound desc = could not find container \"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\": container with ID starting with 896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.529313 4823 scope.go:117] "RemoveContainer" containerID="68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.529494 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\": container with ID starting with 68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe not found: ID does not exist" containerID="68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.529516 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} err="failed to get container status \"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\": rpc error: code = NotFound desc = could not find container \"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\": container with ID starting with 68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.529529 4823 scope.go:117] "RemoveContainer" containerID="6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.529717 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\": container with ID starting with 6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef not found: ID does not exist" containerID="6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.529735 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} err="failed to get container status \"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\": rpc error: code = NotFound desc = could not find container \"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\": container with ID starting with 6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.529798 4823 scope.go:117] "RemoveContainer" containerID="78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.529976 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\": container with ID starting with 78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07 not found: ID does not exist" containerID="78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530001 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} err="failed to get container status \"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\": rpc error: code = NotFound desc = could not find container \"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\": container with ID starting with 78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530030 4823 scope.go:117] "RemoveContainer" containerID="2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.530184 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\": container with ID starting with 2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996 not found: ID does not exist" containerID="2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530208 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} err="failed to get container status \"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\": rpc error: code = NotFound desc = could not find container \"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\": container with ID starting with 2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530225 4823 scope.go:117] "RemoveContainer" 
containerID="66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.530375 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\": container with ID starting with 66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7 not found: ID does not exist" containerID="66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530448 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} err="failed to get container status \"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\": rpc error: code = NotFound desc = could not find container \"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\": container with ID starting with 66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530462 4823 scope.go:117] "RemoveContainer" containerID="dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.530611 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\": container with ID starting with dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3 not found: ID does not exist" containerID="dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530628 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} err="failed to get container status \"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\": rpc error: code = NotFound desc = could not find container \"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\": container with ID starting with dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530639 4823 scope.go:117] "RemoveContainer" containerID="603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.530800 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\": container with ID starting with 603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b not found: ID does not exist" containerID="603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530817 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} err="failed to get container status \"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\": rpc error: code = NotFound desc = could not find container \"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\": container with ID starting with 
603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.530828 4823 scope.go:117] "RemoveContainer" containerID="34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db" Jan 21 17:27:15 crc kubenswrapper[4823]: E0121 17:27:15.531345 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\": container with ID starting with 34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db not found: ID does not exist" containerID="34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.531367 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"} err="failed to get container status \"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\": rpc error: code = NotFound desc = could not find container \"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\": container with ID starting with 34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.531383 4823 scope.go:117] "RemoveContainer" containerID="127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.531530 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} err="failed to get container status \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": rpc error: code = NotFound desc = could not find container \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": container with ID starting with 127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.531545 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.531680 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} err="failed to get container status \"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\": rpc error: code = NotFound desc = could not find container \"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\": container with ID starting with 896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.531695 4823 scope.go:117] "RemoveContainer" containerID="68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.531837 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} err="failed to get container status \"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\": rpc error: code = NotFound desc = could not find container \"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\": container with ID starting with 
68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.531863 4823 scope.go:117] "RemoveContainer" containerID="6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532016 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} err="failed to get container status \"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\": rpc error: code = NotFound desc = could not find container \"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\": container with ID starting with 6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532032 4823 scope.go:117] "RemoveContainer" containerID="78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532165 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} err="failed to get container status \"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\": rpc error: code = NotFound desc = could not find container \"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\": container with ID starting with 78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532180 4823 scope.go:117] "RemoveContainer" containerID="2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532315 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} err="failed to get container status \"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\": rpc error: code = NotFound desc = could not find container \"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\": container with ID starting with 2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532330 4823 scope.go:117] "RemoveContainer" containerID="66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532473 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} err="failed to get container status \"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\": rpc error: code = NotFound desc = could not find container \"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\": container with ID starting with 66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532489 4823 scope.go:117] "RemoveContainer" containerID="dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532650 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} err="failed to get container status \"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\": rpc error: code = NotFound desc = could not find container \"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\": container with ID starting with dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3 not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532665 4823 scope.go:117] "RemoveContainer" containerID="603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532823 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} err="failed to get container status \"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\": rpc error: code = NotFound desc = could not find container \"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\": container with ID starting with 603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532837 4823 scope.go:117] "RemoveContainer" containerID="34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.532994 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"} err="failed to get container status \"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\": rpc error: code = NotFound desc = could not find container \"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\": container with ID starting with 34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.533008 4823 scope.go:117] "RemoveContainer" containerID="127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.533205 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} err="failed to get container status \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": rpc error: code = NotFound desc = could not find container \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": container with ID starting with 127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd not found: ID does not exist" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.533219 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151" Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.533393 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} err="failed to get container status \"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\": rpc error: code = NotFound desc = could not find container \"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\": container with ID starting with 896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151 not found: ID does not exist" Jan 
21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.533409 4823 scope.go:117] "RemoveContainer" containerID="68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.533676 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} err="failed to get container status \"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\": rpc error: code = NotFound desc = could not find container \"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\": container with ID starting with 68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.533717 4823 scope.go:117] "RemoveContainer" containerID="6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.533972 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} err="failed to get container status \"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\": rpc error: code = NotFound desc = could not find container \"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\": container with ID starting with 6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.533997 4823 scope.go:117] "RemoveContainer" containerID="78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.534228 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} err="failed to get container status \"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\": rpc error: code = NotFound desc = could not find container \"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\": container with ID starting with 78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07 not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.534245 4823 scope.go:117] "RemoveContainer" containerID="2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.534502 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} err="failed to get container status \"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\": rpc error: code = NotFound desc = could not find container \"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\": container with ID starting with 2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996 not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.534523 4823 scope.go:117] "RemoveContainer" containerID="66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.534707 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} err="failed to get container status \"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\": rpc error: code = NotFound desc = could not find container \"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\": container with ID starting with 66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7 not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.534726 4823 scope.go:117] "RemoveContainer" containerID="dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.534907 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} err="failed to get container status \"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\": rpc error: code = NotFound desc = could not find container \"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\": container with ID starting with dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3 not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.534960 4823 scope.go:117] "RemoveContainer" containerID="603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.535110 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} err="failed to get container status \"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\": rpc error: code = NotFound desc = could not find container \"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\": container with ID starting with 603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.535124 4823 scope.go:117] "RemoveContainer" containerID="34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.535330 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"} err="failed to get container status \"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\": rpc error: code = NotFound desc = could not find container \"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\": container with ID starting with 34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.535375 4823 scope.go:117] "RemoveContainer" containerID="127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.535561 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} err="failed to get container status \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": rpc error: code = NotFound desc = could not find container \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": container with ID starting with 127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.535586 4823 scope.go:117] "RemoveContainer" containerID="896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.535958 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151"} err="failed to get container status \"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\": rpc error: code = NotFound desc = could not find container \"896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151\": container with ID starting with 896d3310211c1051a703cecf77a714d26fa5dac5bab2d42156660132896e6151 not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.535979 4823 scope.go:117] "RemoveContainer" containerID="68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.536162 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe"} err="failed to get container status \"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\": rpc error: code = NotFound desc = could not find container \"68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe\": container with ID starting with 68843dcd1042ccdb42117765acccafa272a9b08df39b8359b2ca58606e25e2fe not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.536183 4823 scope.go:117] "RemoveContainer" containerID="6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.536427 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef"} err="failed to get container status \"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\": rpc error: code = NotFound desc = could not find container \"6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef\": container with ID starting with 6d77b4415db07f872a900673a7018016ab3183170b950c71bf298b78e750c3ef not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.536448 4823 scope.go:117] "RemoveContainer" containerID="78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.536833 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07"} err="failed to get container status \"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\": rpc error: code = NotFound desc = could not find container \"78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07\": container with ID starting with 78a26e83d97fa3201d6f12202bfba9218f73121a1f1b8b17608611d13f54cd07 not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.536867 4823 scope.go:117] "RemoveContainer" containerID="2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.537092 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996"} err="failed to get container status \"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\": rpc error: code = NotFound desc = could not find container \"2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996\": container with ID starting with 2920b3c44b3b1df5341803a124b330737c1e4d8c89d75a93f188317b89254996 not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.537112 4823 scope.go:117] "RemoveContainer" containerID="66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.537286 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7"} err="failed to get container status \"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\": rpc error: code = NotFound desc = could not find container \"66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7\": container with ID starting with 66b564e1d5e2dec9e59a3ac9e8903e9d4629e3ef8c55f23c5b9069e8310180a7 not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.537306 4823 scope.go:117] "RemoveContainer" containerID="dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.537461 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3"} err="failed to get container status \"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\": rpc error: code = NotFound desc = could not find container \"dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3\": container with ID starting with dc01a39b71476c168d45b056638cfb9ef7362dc9bf350bc949a9be5dc1f720a3 not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.537482 4823 scope.go:117] "RemoveContainer" containerID="603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.537752 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b"} err="failed to get container status \"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\": rpc error: code = NotFound desc = could not find container \"603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b\": container with ID starting with 603ca681f779222459e937b3de22fcd83166d1ec4ab3977da1a790c56bb7b98b not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.537770 4823 scope.go:117] "RemoveContainer" containerID="34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.543079 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db"} err="failed to get container status \"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\": rpc error: code = NotFound desc = could not find container \"34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db\": container with ID starting with 34baf16cf25eca1287b42c63bc6c452cf408fbb692c3107080e2c0f6054487db not found: ID does not exist"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.543104 4823 scope.go:117] "RemoveContainer" containerID="127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"
Jan 21 17:27:15 crc kubenswrapper[4823]: I0121 17:27:15.543485 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd"} err="failed to get container status \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": rpc error: code = NotFound desc = could not find container \"127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd\": container with ID starting with 127c87d39080f528bc97489140c0444bf047b825445ae4bdb534c178ff1202cd not found: ID does not exist"
Jan 21 17:27:16 crc kubenswrapper[4823]: I0121 17:27:16.307623 4823 generic.go:334] "Generic (PLEG): container finished" podID="f0be6266-4117-47f2-9a1d-a973bab3dee8" containerID="08530771e0ec031c0c17309e92d718786ec9534f8d9482dbf1bd34aa21509291" exitCode=0
Jan 21 17:27:16 crc kubenswrapper[4823]: I0121 17:27:16.307704 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerDied","Data":"08530771e0ec031c0c17309e92d718786ec9534f8d9482dbf1bd34aa21509291"}
Jan 21 17:27:16 crc kubenswrapper[4823]: I0121 17:27:16.308316 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"4e3a1bcc8ab8410a1f8f31f32e0c0b07d6ef361200e0951921c9530c031803a4"}
Jan 21 17:27:16 crc kubenswrapper[4823]: I0121 17:27:16.308354 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"bf0b31f9d068e3dff97fa32f4e05a805a124f9c1d30f9e3b35ee1f3cacaec8e4"}
Jan 21 17:27:16 crc kubenswrapper[4823]: I0121 17:27:16.308371 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"067f0a4be05299012186cf7b2072283354f5c26621cbb1c2f82b9a535ccb5eca"}
Jan 21 17:27:16 crc kubenswrapper[4823]: I0121 17:27:16.308386 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"c7f2976cc42c0e3ab284017d79cbe66baaa521647f2d14ce8eb7bc206e44fba1"}
Jan 21 17:27:16 crc kubenswrapper[4823]: I0121 17:27:16.308401 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"6113323b5b4ff9902d003d9677607bda12ae721a94904fa265d64be22a35f0c0"}
Jan 21 17:27:16 crc kubenswrapper[4823]: I0121 17:27:16.308414 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"af3ce1f0d6c982ba53fd71140b32e82e89b2b69cebd120fdcd01af94d2ac07a4"}
Jan 21 17:27:17 crc kubenswrapper[4823]: I0121 17:27:17.351564 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f1d66f-b00f-4e75-8130-43977e13eec8" path="/var/lib/kubelet/pods/b5f1d66f-b00f-4e75-8130-43977e13eec8/volumes"
Jan 21 17:27:19 crc kubenswrapper[4823]: I0121 17:27:19.334381 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"668457cff3272442f2fe73b314e17fc0a5bb749543cd371fd08a6c037f244a81"}
Jan 21 17:27:20 crc kubenswrapper[4823]: I0121 17:27:20.257540 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-r49b9"
Jan 21 17:27:21 crc kubenswrapper[4823]: I0121 17:27:21.350138 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" event={"ID":"f0be6266-4117-47f2-9a1d-a973bab3dee8","Type":"ContainerStarted","Data":"a01d263bded8e6d465bdc302dd751e88e87337feeebc928197142fbc9402a0f7"}
Jan 21 17:27:21 crc kubenswrapper[4823]: I0121 17:27:21.350513 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl"
Jan 21 17:27:21 crc kubenswrapper[4823]: I0121 17:27:21.350530 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl"
Jan 21 17:27:21 crc kubenswrapper[4823]: I0121 17:27:21.376174 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl" podStartSLOduration=7.376160774 podStartE2EDuration="7.376160774s" podCreationTimestamp="2026-01-21 17:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:27:21.373527137 +0000 UTC m=+642.299658017" watchObservedRunningTime="2026-01-21 17:27:21.376160774 +0000 UTC m=+642.302291634"
Jan 21 17:27:21 crc kubenswrapper[4823]: I0121 17:27:21.378766 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl"
Jan 21 17:27:22 crc kubenswrapper[4823]: I0121 17:27:22.354410 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl"
Jan 21 17:27:22 crc kubenswrapper[4823]: I0121 17:27:22.415609 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl"
Jan 21 17:27:27 crc kubenswrapper[4823]: I0121 17:27:27.343543 4823 scope.go:117] "RemoveContainer" containerID="3d8c899ad3979e18f7a6f8a0287b50cebebda6042365ff774c1cd367e9563469"
Jan 21 17:27:27 crc kubenswrapper[4823]: E0121 17:27:27.344084 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-skvzm_openshift-multus(ea8699bd-e53a-443e-b2e5-0fe577f2c19f)\"" pod="openshift-multus/multus-skvzm" podUID="ea8699bd-e53a-443e-b2e5-0fe577f2c19f"
Jan 21 17:27:39 crc kubenswrapper[4823]: I0121 17:27:39.572465 4823 scope.go:117] "RemoveContainer" containerID="49850e8eb3d228e94973b646f5a078c4b6d2da5ac66388deb43166d2966ff40a"
Jan 21 17:27:41 crc kubenswrapper[4823]: I0121 17:27:41.472948 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/2.log"
Jan 21 17:27:42 crc kubenswrapper[4823]: I0121 17:27:42.344308 4823 scope.go:117] "RemoveContainer" containerID="3d8c899ad3979e18f7a6f8a0287b50cebebda6042365ff774c1cd367e9563469"
Jan 21 17:27:43 crc kubenswrapper[4823]: I0121 17:27:43.492801 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-skvzm_ea8699bd-e53a-443e-b2e5-0fe577f2c19f/kube-multus/2.log"
Jan 21 17:27:43 crc kubenswrapper[4823]: I0121 17:27:43.493460 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-skvzm" event={"ID":"ea8699bd-e53a-443e-b2e5-0fe577f2c19f","Type":"ContainerStarted","Data":"b56151ae83a70ba469e397054bd3b0645818557042bd62c8dac046bfd219ace1"}
Jan 21 17:27:45 crc kubenswrapper[4823]: I0121 17:27:45.179486 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9hgl"
Jan 21 17:27:53 crc kubenswrapper[4823]: I0121 17:27:53.960544 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"]
Jan 21 17:27:53 crc kubenswrapper[4823]: I0121 17:27:53.961948 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:53 crc kubenswrapper[4823]: I0121 17:27:53.963923 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 21 17:27:53 crc kubenswrapper[4823]: I0121 17:27:53.972779 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"]
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.077655 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.077736 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkb6d\" (UniqueName: \"kubernetes.io/projected/48b0d529-dbd0-4203-8af0-e7f42be18789-kube-api-access-wkb6d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.077929 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.180124 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.180216 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkb6d\" (UniqueName: \"kubernetes.io/projected/48b0d529-dbd0-4203-8af0-e7f42be18789-kube-api-access-wkb6d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.180289 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.180808 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.180839 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.211023 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkb6d\" (UniqueName: \"kubernetes.io/projected/48b0d529-dbd0-4203-8af0-e7f42be18789-kube-api-access-wkb6d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.281572 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.486020 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"]
Jan 21 17:27:54 crc kubenswrapper[4823]: I0121 17:27:54.559323 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn" event={"ID":"48b0d529-dbd0-4203-8af0-e7f42be18789","Type":"ContainerStarted","Data":"ba3636945cef483043ee16ee053fc8a1aaca37c484ae21ab8f8402372afa7b43"}
Jan 21 17:27:55 crc kubenswrapper[4823]: I0121 17:27:55.566022 4823 generic.go:334] "Generic (PLEG): container finished" podID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerID="8a0166b3fc3c9e47183db89c61e1325c9ef9b459e7800a4a294ac5a644ac97a4" exitCode=0
Jan 21 17:27:55 crc kubenswrapper[4823]: I0121 17:27:55.566075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn" event={"ID":"48b0d529-dbd0-4203-8af0-e7f42be18789","Type":"ContainerDied","Data":"8a0166b3fc3c9e47183db89c61e1325c9ef9b459e7800a4a294ac5a644ac97a4"}
Jan 21 17:27:57 crc kubenswrapper[4823]: I0121 17:27:57.587048 4823 generic.go:334] "Generic (PLEG): container finished" podID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerID="d05977fee00a110e152def3a9f591551e9f321e8308d30aed2a2b680171197e5" exitCode=0
Jan 21 17:27:57 crc kubenswrapper[4823]: I0121 17:27:57.587180 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn" event={"ID":"48b0d529-dbd0-4203-8af0-e7f42be18789","Type":"ContainerDied","Data":"d05977fee00a110e152def3a9f591551e9f321e8308d30aed2a2b680171197e5"}
Jan 21 17:27:58 crc kubenswrapper[4823]: I0121 17:27:58.599445 4823 generic.go:334] "Generic (PLEG): container finished" podID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerID="2ac38f08e79040d7a52d356808a1068f997486e69cc593accd7733d4e6f0b58e" exitCode=0
Jan 21 17:27:58 crc kubenswrapper[4823]: I0121 17:27:58.599524 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn" event={"ID":"48b0d529-dbd0-4203-8af0-e7f42be18789","Type":"ContainerDied","Data":"2ac38f08e79040d7a52d356808a1068f997486e69cc593accd7733d4e6f0b58e"}
Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.832830 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn"
Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.854351 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-bundle\") pod \"48b0d529-dbd0-4203-8af0-e7f42be18789\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") "
Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.854406 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-util\") pod \"48b0d529-dbd0-4203-8af0-e7f42be18789\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") "
Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.854500 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkb6d\" (UniqueName: \"kubernetes.io/projected/48b0d529-dbd0-4203-8af0-e7f42be18789-kube-api-access-wkb6d\") pod \"48b0d529-dbd0-4203-8af0-e7f42be18789\" (UID: \"48b0d529-dbd0-4203-8af0-e7f42be18789\") "
Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.858194 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-bundle" (OuterVolumeSpecName: "bundle") pod "48b0d529-dbd0-4203-8af0-e7f42be18789" (UID: "48b0d529-dbd0-4203-8af0-e7f42be18789"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.861905 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b0d529-dbd0-4203-8af0-e7f42be18789-kube-api-access-wkb6d" (OuterVolumeSpecName: "kube-api-access-wkb6d") pod "48b0d529-dbd0-4203-8af0-e7f42be18789" (UID: "48b0d529-dbd0-4203-8af0-e7f42be18789"). InnerVolumeSpecName "kube-api-access-wkb6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.877590 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-util" (OuterVolumeSpecName: "util") pod "48b0d529-dbd0-4203-8af0-e7f42be18789" (UID: "48b0d529-dbd0-4203-8af0-e7f42be18789"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
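The mount and teardown sequence above covers three volumes on the bundle-extract pod: two emptyDirs ("bundle", "util") and one projected service-account token ("kube-api-access-wkb6d"). A sketch of the same volume set as client-go types (assumes k8s.io/api is on the module path; names come from the log, everything else is illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{Name: "bundle", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "util", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "kube-api-access-wkb6d", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{}, // token/CA sources omitted in this sketch
		}},
	}
	for _, v := range vols {
		fmt.Printf("volume %q: emptyDir=%v projected=%v\n",
			v.Name, v.VolumeSource.EmptyDir != nil, v.VolumeSource.Projected != nil)
	}
}
```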
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.955929 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkb6d\" (UniqueName: \"kubernetes.io/projected/48b0d529-dbd0-4203-8af0-e7f42be18789-kube-api-access-wkb6d\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.955986 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:27:59 crc kubenswrapper[4823]: I0121 17:27:59.956007 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48b0d529-dbd0-4203-8af0-e7f42be18789-util\") on node \"crc\" DevicePath \"\"" Jan 21 17:28:00 crc kubenswrapper[4823]: I0121 17:28:00.611507 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn" event={"ID":"48b0d529-dbd0-4203-8af0-e7f42be18789","Type":"ContainerDied","Data":"ba3636945cef483043ee16ee053fc8a1aaca37c484ae21ab8f8402372afa7b43"} Jan 21 17:28:00 crc kubenswrapper[4823]: I0121 17:28:00.611551 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba3636945cef483043ee16ee053fc8a1aaca37c484ae21ab8f8402372afa7b43" Jan 21 17:28:00 crc kubenswrapper[4823]: I0121 17:28:00.611605 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.604324 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj"] Jan 21 17:28:11 crc kubenswrapper[4823]: E0121 17:28:11.605213 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerName="util" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.605228 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerName="util" Jan 21 17:28:11 crc kubenswrapper[4823]: E0121 17:28:11.605241 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerName="pull" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.605249 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerName="pull" Jan 21 17:28:11 crc kubenswrapper[4823]: E0121 17:28:11.605267 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerName="extract" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.605275 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerName="extract" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.605389 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b0d529-dbd0-4203-8af0-e7f42be18789" containerName="extract" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.605905 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.608131 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.608584 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.608826 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-6spth" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.625968 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj"] Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.702987 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc2b6\" (UniqueName: \"kubernetes.io/projected/654af63c-1337-4345-a2d6-4aa64462e8a9-kube-api-access-sc2b6\") pod \"obo-prometheus-operator-68bc856cb9-vmjmj\" (UID: \"654af63c-1337-4345-a2d6-4aa64462e8a9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.804147 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc2b6\" (UniqueName: \"kubernetes.io/projected/654af63c-1337-4345-a2d6-4aa64462e8a9-kube-api-access-sc2b6\") pod \"obo-prometheus-operator-68bc856cb9-vmjmj\" (UID: \"654af63c-1337-4345-a2d6-4aa64462e8a9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.848150 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc2b6\" (UniqueName: \"kubernetes.io/projected/654af63c-1337-4345-a2d6-4aa64462e8a9-kube-api-access-sc2b6\") pod \"obo-prometheus-operator-68bc856cb9-vmjmj\" (UID: \"654af63c-1337-4345-a2d6-4aa64462e8a9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.925798 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.949389 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p"] Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.950812 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.953375 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-fcjpk" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.955654 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.969615 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p"] Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.974193 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f"] Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.975244 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" Jan 21 17:28:11 crc kubenswrapper[4823]: I0121 17:28:11.992236 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f"] Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.007371 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52f5c7d9-13fa-4e74-be3b-d4aee174b931-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f\" (UID: \"52f5c7d9-13fa-4e74-be3b-d4aee174b931\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.007439 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/497404e0-8944-46a8-9d67-a4334950f54c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p\" (UID: \"497404e0-8944-46a8-9d67-a4334950f54c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.007470 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52f5c7d9-13fa-4e74-be3b-d4aee174b931-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f\" (UID: \"52f5c7d9-13fa-4e74-be3b-d4aee174b931\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.007508 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/497404e0-8944-46a8-9d67-a4334950f54c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p\" (UID: \"497404e0-8944-46a8-9d67-a4334950f54c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.073567 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l9hhp"] Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.074512 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.076819 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rgkgh" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.076939 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.097478 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l9hhp"] Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.111434 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/497404e0-8944-46a8-9d67-a4334950f54c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p\" (UID: \"497404e0-8944-46a8-9d67-a4334950f54c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.111507 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52f5c7d9-13fa-4e74-be3b-d4aee174b931-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f\" (UID: \"52f5c7d9-13fa-4e74-be3b-d4aee174b931\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.111538 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e55b3a4-cfae-41a1-a153-611e2e96dc75-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l9hhp\" (UID: \"7e55b3a4-cfae-41a1-a153-611e2e96dc75\") " pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.111564 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/497404e0-8944-46a8-9d67-a4334950f54c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p\" (UID: \"497404e0-8944-46a8-9d67-a4334950f54c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.111606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8bnt\" (UniqueName: \"kubernetes.io/projected/7e55b3a4-cfae-41a1-a153-611e2e96dc75-kube-api-access-b8bnt\") pod \"observability-operator-59bdc8b94-l9hhp\" (UID: \"7e55b3a4-cfae-41a1-a153-611e2e96dc75\") " pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.111624 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52f5c7d9-13fa-4e74-be3b-d4aee174b931-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f\" (UID: \"52f5c7d9-13fa-4e74-be3b-d4aee174b931\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.119738 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/52f5c7d9-13fa-4e74-be3b-d4aee174b931-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f\" (UID: \"52f5c7d9-13fa-4e74-be3b-d4aee174b931\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.122707 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/497404e0-8944-46a8-9d67-a4334950f54c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p\" (UID: \"497404e0-8944-46a8-9d67-a4334950f54c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.135103 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/497404e0-8944-46a8-9d67-a4334950f54c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p\" (UID: \"497404e0-8944-46a8-9d67-a4334950f54c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.151228 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52f5c7d9-13fa-4e74-be3b-d4aee174b931-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f\" (UID: \"52f5c7d9-13fa-4e74-be3b-d4aee174b931\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.213433 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8bnt\" (UniqueName: \"kubernetes.io/projected/7e55b3a4-cfae-41a1-a153-611e2e96dc75-kube-api-access-b8bnt\") pod \"observability-operator-59bdc8b94-l9hhp\" (UID: \"7e55b3a4-cfae-41a1-a153-611e2e96dc75\") " pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.213561 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e55b3a4-cfae-41a1-a153-611e2e96dc75-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l9hhp\" (UID: \"7e55b3a4-cfae-41a1-a153-611e2e96dc75\") " pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.218904 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e55b3a4-cfae-41a1-a153-611e2e96dc75-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l9hhp\" (UID: \"7e55b3a4-cfae-41a1-a153-611e2e96dc75\") " pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.243897 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8bnt\" (UniqueName: \"kubernetes.io/projected/7e55b3a4-cfae-41a1-a153-611e2e96dc75-kube-api-access-b8bnt\") pod \"observability-operator-59bdc8b94-l9hhp\" (UID: \"7e55b3a4-cfae-41a1-a153-611e2e96dc75\") " pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.266835 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj"] Jan 21 17:28:12 crc 
kubenswrapper[4823]: I0121 17:28:12.285986 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-76nsr"] Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.286810 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-76nsr" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.288989 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-6rd72" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.311417 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-76nsr"] Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.314918 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrnh\" (UniqueName: \"kubernetes.io/projected/f7048824-b7bd-423e-b237-f4ccb584bb8a-kube-api-access-ztrnh\") pod \"perses-operator-5bf474d74f-76nsr\" (UID: \"f7048824-b7bd-423e-b237-f4ccb584bb8a\") " pod="openshift-operators/perses-operator-5bf474d74f-76nsr" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.315019 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f7048824-b7bd-423e-b237-f4ccb584bb8a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-76nsr\" (UID: \"f7048824-b7bd-423e-b237-f4ccb584bb8a\") " pod="openshift-operators/perses-operator-5bf474d74f-76nsr" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.324065 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.335534 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.392224 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.416628 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrnh\" (UniqueName: \"kubernetes.io/projected/f7048824-b7bd-423e-b237-f4ccb584bb8a-kube-api-access-ztrnh\") pod \"perses-operator-5bf474d74f-76nsr\" (UID: \"f7048824-b7bd-423e-b237-f4ccb584bb8a\") " pod="openshift-operators/perses-operator-5bf474d74f-76nsr" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.416749 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f7048824-b7bd-423e-b237-f4ccb584bb8a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-76nsr\" (UID: \"f7048824-b7bd-423e-b237-f4ccb584bb8a\") " pod="openshift-operators/perses-operator-5bf474d74f-76nsr" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.417988 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f7048824-b7bd-423e-b237-f4ccb584bb8a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-76nsr\" (UID: \"f7048824-b7bd-423e-b237-f4ccb584bb8a\") " pod="openshift-operators/perses-operator-5bf474d74f-76nsr" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.441939 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrnh\" (UniqueName: \"kubernetes.io/projected/f7048824-b7bd-423e-b237-f4ccb584bb8a-kube-api-access-ztrnh\") pod \"perses-operator-5bf474d74f-76nsr\" (UID: \"f7048824-b7bd-423e-b237-f4ccb584bb8a\") " pod="openshift-operators/perses-operator-5bf474d74f-76nsr" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.617445 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-76nsr" Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.634742 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p"] Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.703677 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" event={"ID":"497404e0-8944-46a8-9d67-a4334950f54c","Type":"ContainerStarted","Data":"dfd4e31021949099df1f404380a06ec2afb06f2cbd9b350028ee69d0cb37d64e"} Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.706705 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj" event={"ID":"654af63c-1337-4345-a2d6-4aa64462e8a9","Type":"ContainerStarted","Data":"bb5840b09620f0a62d2176dfd826a823c971dcb87f8df3811f59ecaaff320c7c"} Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.715836 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f"] Jan 21 17:28:12 crc kubenswrapper[4823]: W0121 17:28:12.739112 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52f5c7d9_13fa_4e74_be3b_d4aee174b931.slice/crio-5b567389a84cc5e8e4f49d0f7beb99ec23feaadfcb803d324f211fa4fd549f36 WatchSource:0}: Error finding container 5b567389a84cc5e8e4f49d0f7beb99ec23feaadfcb803d324f211fa4fd549f36: Status 404 returned error can't find the container with id 5b567389a84cc5e8e4f49d0f7beb99ec23feaadfcb803d324f211fa4fd549f36 Jan 21 17:28:12 crc kubenswrapper[4823]: I0121 17:28:12.798402 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l9hhp"] Jan 21 17:28:12 crc kubenswrapper[4823]: W0121 17:28:12.803897 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e55b3a4_cfae_41a1_a153_611e2e96dc75.slice/crio-e10a0fbafb7c89ec0374d8eb316a15d543ed998390e17bc80c74e1480ab59017 WatchSource:0}: Error finding container e10a0fbafb7c89ec0374d8eb316a15d543ed998390e17bc80c74e1480ab59017: Status 404 returned error can't find the container with id e10a0fbafb7c89ec0374d8eb316a15d543ed998390e17bc80c74e1480ab59017 Jan 21 17:28:13 crc kubenswrapper[4823]: I0121 17:28:13.032126 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-76nsr"] Jan 21 17:28:13 crc kubenswrapper[4823]: W0121 17:28:13.043173 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7048824_b7bd_423e_b237_f4ccb584bb8a.slice/crio-c1789b2508e2930dc9ccc1acf70c7e7c01926222cebdf942606342bfbe4cd42b WatchSource:0}: Error finding container c1789b2508e2930dc9ccc1acf70c7e7c01926222cebdf942606342bfbe4cd42b: Status 404 returned error can't find the container with id c1789b2508e2930dc9ccc1acf70c7e7c01926222cebdf942606342bfbe4cd42b Jan 21 17:28:13 crc kubenswrapper[4823]: I0121 17:28:13.714723 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" event={"ID":"52f5c7d9-13fa-4e74-be3b-d4aee174b931","Type":"ContainerStarted","Data":"5b567389a84cc5e8e4f49d0f7beb99ec23feaadfcb803d324f211fa4fd549f36"} Jan 21 17:28:13 crc 
Jan 21 17:28:13 crc kubenswrapper[4823]: I0121 17:28:13.717098 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-76nsr" event={"ID":"f7048824-b7bd-423e-b237-f4ccb584bb8a","Type":"ContainerStarted","Data":"c1789b2508e2930dc9ccc1acf70c7e7c01926222cebdf942606342bfbe4cd42b"}
Jan 21 17:28:13 crc kubenswrapper[4823]: I0121 17:28:13.718906 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" event={"ID":"7e55b3a4-cfae-41a1-a153-611e2e96dc75","Type":"ContainerStarted","Data":"e10a0fbafb7c89ec0374d8eb316a15d543ed998390e17bc80c74e1480ab59017"}
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.818753 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" event={"ID":"7e55b3a4-cfae-41a1-a153-611e2e96dc75","Type":"ContainerStarted","Data":"4846a1c3d76f546fc94f759505399655362c5e723a072362c19435b434dc39a0"}
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.819346 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-l9hhp"
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.821824 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-l9hhp"
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.822176 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" event={"ID":"52f5c7d9-13fa-4e74-be3b-d4aee174b931","Type":"ContainerStarted","Data":"0a5ed8cea45a8fc3b69b50d8adc83211d7aff74e461c8cbad8a45cf34df96e4d"}
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.823597 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-76nsr" event={"ID":"f7048824-b7bd-423e-b237-f4ccb584bb8a","Type":"ContainerStarted","Data":"3b9e96c0f83ce5e41da3497002d8f168e9205656cc4b89056b2ca09ce60516fb"}
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.824273 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-76nsr"
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.826472 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" event={"ID":"497404e0-8944-46a8-9d67-a4334950f54c","Type":"ContainerStarted","Data":"c1ad407cec5d0cb56e98fae3a3a5c7fc02f020244cd86ef7688f2c4f37b27243"}
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.828807 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj" event={"ID":"654af63c-1337-4345-a2d6-4aa64462e8a9","Type":"ContainerStarted","Data":"99d5349d890dde0dc2996bd9810ed9122345bcf3736ec17020c55e1f7364ac2d"}
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.848386 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-l9hhp" podStartSLOduration=1.420840772 podStartE2EDuration="13.848368111s" podCreationTimestamp="2026-01-21 17:28:12 +0000 UTC" firstStartedPulling="2026-01-21 17:28:12.806527576 +0000 UTC m=+693.732658436" lastFinishedPulling="2026-01-21 17:28:25.234054915 +0000 UTC m=+706.160185775" observedRunningTime="2026-01-21 17:28:25.847140829 +0000 UTC m=+706.773271689" watchObservedRunningTime="2026-01-21 17:28:25.848368111 +0000 UTC m=+706.774498971"
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.873806 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-76nsr" podStartSLOduration=1.731441163 podStartE2EDuration="13.873785655s" podCreationTimestamp="2026-01-21 17:28:12 +0000 UTC" firstStartedPulling="2026-01-21 17:28:13.045250395 +0000 UTC m=+693.971381255" lastFinishedPulling="2026-01-21 17:28:25.187594887 +0000 UTC m=+706.113725747" observedRunningTime="2026-01-21 17:28:25.868310606 +0000 UTC m=+706.794441476" watchObservedRunningTime="2026-01-21 17:28:25.873785655 +0000 UTC m=+706.799916525"
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.896175 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p" podStartSLOduration=2.368368542 podStartE2EDuration="14.896159671s" podCreationTimestamp="2026-01-21 17:28:11 +0000 UTC" firstStartedPulling="2026-01-21 17:28:12.659326066 +0000 UTC m=+693.585456926" lastFinishedPulling="2026-01-21 17:28:25.187117185 +0000 UTC m=+706.113248055" observedRunningTime="2026-01-21 17:28:25.891037472 +0000 UTC m=+706.817168332" watchObservedRunningTime="2026-01-21 17:28:25.896159671 +0000 UTC m=+706.822290531"
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.930158 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f" podStartSLOduration=2.495480464 podStartE2EDuration="14.930140193s" podCreationTimestamp="2026-01-21 17:28:11 +0000 UTC" firstStartedPulling="2026-01-21 17:28:12.741386166 +0000 UTC m=+693.667517026" lastFinishedPulling="2026-01-21 17:28:25.176045895 +0000 UTC m=+706.102176755" observedRunningTime="2026-01-21 17:28:25.929686571 +0000 UTC m=+706.855817441" watchObservedRunningTime="2026-01-21 17:28:25.930140193 +0000 UTC m=+706.856271053"
Jan 21 17:28:25 crc kubenswrapper[4823]: I0121 17:28:25.953208 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vmjmj" podStartSLOduration=2.063343824 podStartE2EDuration="14.953185447s" podCreationTimestamp="2026-01-21 17:28:11 +0000 UTC" firstStartedPulling="2026-01-21 17:28:12.2821798 +0000 UTC m=+693.208310660" lastFinishedPulling="2026-01-21 17:28:25.172021413 +0000 UTC m=+706.098152283" observedRunningTime="2026-01-21 17:28:25.951144305 +0000 UTC m=+706.877275185" watchObservedRunningTime="2026-01-21 17:28:25.953185447 +0000 UTC m=+706.879316317"
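The podStartSLOduration figures above are the end-to-end startup time with the image-pull window subtracted, so slow registry pulls do not count against the SLO; the arithmetic checks out against these entries. A sketch reproducing the observability-operator-59bdc8b94-l9hhp numbers from the timestamps published in its entry (the subtraction rule is inferred from the logged values, not taken from the tracker's source):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the observability-operator-59bdc8b94-l9hhp entry.
	created := mustParse("2026-01-21 17:28:12 +0000 UTC")
	firstPull := mustParse("2026-01-21 17:28:12.806527576 +0000 UTC")
	lastPull := mustParse("2026-01-21 17:28:25.234054915 +0000 UTC")
	running := mustParse("2026-01-21 17:28:25.848368111 +0000 UTC")

	e2e := running.Sub(created)          // 13.848368111s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // 1.420840772s (podStartSLOduration)
	fmt.Println(e2e, slo)
}
```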
127.0.0.1:8798: connect: connection refused" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.239749 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf"] Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.243165 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.245817 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.259519 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf"] Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.395749 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2tfh\" (UniqueName: \"kubernetes.io/projected/84775bf8-065e-4f98-987c-733201a3d87c-kube-api-access-r2tfh\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.395814 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.395846 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.497309 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2tfh\" (UniqueName: \"kubernetes.io/projected/84775bf8-065e-4f98-987c-733201a3d87c-kube-api-access-r2tfh\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.497376 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.497416 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-util\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.498121 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.498311 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.523171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2tfh\" (UniqueName: \"kubernetes.io/projected/84775bf8-065e-4f98-987c-733201a3d87c-kube-api-access-r2tfh\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.589782 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.850866 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf"] Jan 21 17:28:50 crc kubenswrapper[4823]: I0121 17:28:50.976986 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" event={"ID":"84775bf8-065e-4f98-987c-733201a3d87c","Type":"ContainerStarted","Data":"eb54d2bf3c5240ba732625956e6f21fc24cbd1279e67e98c975e4109b1454893"} Jan 21 17:28:52 crc kubenswrapper[4823]: I0121 17:28:52.992224 4823 generic.go:334] "Generic (PLEG): container finished" podID="84775bf8-065e-4f98-987c-733201a3d87c" containerID="b037f88d5328ec8063604b8fa412e01c14df29ab035f0b5abc739465d3768e2e" exitCode=0 Jan 21 17:28:52 crc kubenswrapper[4823]: I0121 17:28:52.992381 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" event={"ID":"84775bf8-065e-4f98-987c-733201a3d87c","Type":"ContainerDied","Data":"b037f88d5328ec8063604b8fa412e01c14df29ab035f0b5abc739465d3768e2e"} Jan 21 17:28:56 crc kubenswrapper[4823]: I0121 17:28:56.014808 4823 generic.go:334] "Generic (PLEG): container finished" podID="84775bf8-065e-4f98-987c-733201a3d87c" containerID="06e6abdb9e82ed990d9a4faa96b269cce79a4ee9279995c73efdd61e431978d5" exitCode=0 Jan 21 17:28:56 crc kubenswrapper[4823]: I0121 17:28:56.014920 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" 
event={"ID":"84775bf8-065e-4f98-987c-733201a3d87c","Type":"ContainerDied","Data":"06e6abdb9e82ed990d9a4faa96b269cce79a4ee9279995c73efdd61e431978d5"} Jan 21 17:28:58 crc kubenswrapper[4823]: I0121 17:28:58.031230 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" event={"ID":"84775bf8-065e-4f98-987c-733201a3d87c","Type":"ContainerStarted","Data":"03e9cca0bb4cd2c3d243c26e026c8931f6cc7bf114066c644e18e6ea18640e00"} Jan 21 17:28:58 crc kubenswrapper[4823]: I0121 17:28:58.050764 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" podStartSLOduration=5.614746016 podStartE2EDuration="8.050743474s" podCreationTimestamp="2026-01-21 17:28:50 +0000 UTC" firstStartedPulling="2026-01-21 17:28:52.994377466 +0000 UTC m=+733.920508326" lastFinishedPulling="2026-01-21 17:28:55.430374924 +0000 UTC m=+736.356505784" observedRunningTime="2026-01-21 17:28:58.048538279 +0000 UTC m=+738.974669189" watchObservedRunningTime="2026-01-21 17:28:58.050743474 +0000 UTC m=+738.976874354" Jan 21 17:28:59 crc kubenswrapper[4823]: I0121 17:28:59.041187 4823 generic.go:334] "Generic (PLEG): container finished" podID="84775bf8-065e-4f98-987c-733201a3d87c" containerID="03e9cca0bb4cd2c3d243c26e026c8931f6cc7bf114066c644e18e6ea18640e00" exitCode=0 Jan 21 17:28:59 crc kubenswrapper[4823]: I0121 17:28:59.041271 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" event={"ID":"84775bf8-065e-4f98-987c-733201a3d87c","Type":"ContainerDied","Data":"03e9cca0bb4cd2c3d243c26e026c8931f6cc7bf114066c644e18e6ea18640e00"} Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.339816 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.445541 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-bundle\") pod \"84775bf8-065e-4f98-987c-733201a3d87c\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.445621 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-util\") pod \"84775bf8-065e-4f98-987c-733201a3d87c\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.445702 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2tfh\" (UniqueName: \"kubernetes.io/projected/84775bf8-065e-4f98-987c-733201a3d87c-kube-api-access-r2tfh\") pod \"84775bf8-065e-4f98-987c-733201a3d87c\" (UID: \"84775bf8-065e-4f98-987c-733201a3d87c\") " Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.446382 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-bundle" (OuterVolumeSpecName: "bundle") pod "84775bf8-065e-4f98-987c-733201a3d87c" (UID: "84775bf8-065e-4f98-987c-733201a3d87c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.452759 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84775bf8-065e-4f98-987c-733201a3d87c-kube-api-access-r2tfh" (OuterVolumeSpecName: "kube-api-access-r2tfh") pod "84775bf8-065e-4f98-987c-733201a3d87c" (UID: "84775bf8-065e-4f98-987c-733201a3d87c"). InnerVolumeSpecName "kube-api-access-r2tfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.457636 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-util" (OuterVolumeSpecName: "util") pod "84775bf8-065e-4f98-987c-733201a3d87c" (UID: "84775bf8-065e-4f98-987c-733201a3d87c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.546956 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.547030 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84775bf8-065e-4f98-987c-733201a3d87c-util\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:00 crc kubenswrapper[4823]: I0121 17:29:00.547046 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2tfh\" (UniqueName: \"kubernetes.io/projected/84775bf8-065e-4f98-987c-733201a3d87c-kube-api-access-r2tfh\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:01 crc kubenswrapper[4823]: I0121 17:29:01.059995 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" event={"ID":"84775bf8-065e-4f98-987c-733201a3d87c","Type":"ContainerDied","Data":"eb54d2bf3c5240ba732625956e6f21fc24cbd1279e67e98c975e4109b1454893"} Jan 21 17:29:01 crc kubenswrapper[4823]: I0121 17:29:01.060037 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb54d2bf3c5240ba732625956e6f21fc24cbd1279e67e98c975e4109b1454893" Jan 21 17:29:01 crc kubenswrapper[4823]: I0121 17:29:01.060147 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.645599 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-sc268"] Jan 21 17:29:06 crc kubenswrapper[4823]: E0121 17:29:06.646477 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84775bf8-065e-4f98-987c-733201a3d87c" containerName="pull" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.646493 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="84775bf8-065e-4f98-987c-733201a3d87c" containerName="pull" Jan 21 17:29:06 crc kubenswrapper[4823]: E0121 17:29:06.646522 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84775bf8-065e-4f98-987c-733201a3d87c" containerName="extract" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.646531 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="84775bf8-065e-4f98-987c-733201a3d87c" containerName="extract" Jan 21 17:29:06 crc kubenswrapper[4823]: E0121 17:29:06.646544 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84775bf8-065e-4f98-987c-733201a3d87c" containerName="util" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.646552 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="84775bf8-065e-4f98-987c-733201a3d87c" containerName="util" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.646675 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="84775bf8-065e-4f98-987c-733201a3d87c" containerName="extract" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.647179 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-sc268" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.650407 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6rdsc" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.650576 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.650761 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.662699 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-sc268"] Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.725529 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxhr\" (UniqueName: \"kubernetes.io/projected/702ac8ba-dd10-4b6e-978b-cf873a40ceb3-kube-api-access-rbxhr\") pod \"nmstate-operator-646758c888-sc268\" (UID: \"702ac8ba-dd10-4b6e-978b-cf873a40ceb3\") " pod="openshift-nmstate/nmstate-operator-646758c888-sc268" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.826422 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbxhr\" (UniqueName: \"kubernetes.io/projected/702ac8ba-dd10-4b6e-978b-cf873a40ceb3-kube-api-access-rbxhr\") pod \"nmstate-operator-646758c888-sc268\" (UID: \"702ac8ba-dd10-4b6e-978b-cf873a40ceb3\") " pod="openshift-nmstate/nmstate-operator-646758c888-sc268" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.843718 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbxhr\" 
(UniqueName: \"kubernetes.io/projected/702ac8ba-dd10-4b6e-978b-cf873a40ceb3-kube-api-access-rbxhr\") pod \"nmstate-operator-646758c888-sc268\" (UID: \"702ac8ba-dd10-4b6e-978b-cf873a40ceb3\") " pod="openshift-nmstate/nmstate-operator-646758c888-sc268" Jan 21 17:29:06 crc kubenswrapper[4823]: I0121 17:29:06.965032 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-sc268" Jan 21 17:29:07 crc kubenswrapper[4823]: I0121 17:29:07.184723 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-sc268"] Jan 21 17:29:08 crc kubenswrapper[4823]: I0121 17:29:08.099207 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-sc268" event={"ID":"702ac8ba-dd10-4b6e-978b-cf873a40ceb3","Type":"ContainerStarted","Data":"a07ff80beae7bc874b1a03e1846a99e344af1b4eacb609a365f43960e9d32574"} Jan 21 17:29:09 crc kubenswrapper[4823]: I0121 17:29:09.624880 4823 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 17:29:12 crc kubenswrapper[4823]: I0121 17:29:12.126027 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-sc268" event={"ID":"702ac8ba-dd10-4b6e-978b-cf873a40ceb3","Type":"ContainerStarted","Data":"8a1518c69051f8849dca0d5dc334dfd9ffe8823f43d6b54eb97c2f1340f59337"} Jan 21 17:29:12 crc kubenswrapper[4823]: I0121 17:29:12.151401 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-sc268" podStartSLOduration=1.465217504 podStartE2EDuration="6.151378795s" podCreationTimestamp="2026-01-21 17:29:06 +0000 UTC" firstStartedPulling="2026-01-21 17:29:07.194108004 +0000 UTC m=+748.120238874" lastFinishedPulling="2026-01-21 17:29:11.880269305 +0000 UTC m=+752.806400165" observedRunningTime="2026-01-21 17:29:12.145655183 +0000 UTC m=+753.071786043" watchObservedRunningTime="2026-01-21 17:29:12.151378795 +0000 UTC m=+753.077509655" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.084389 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-p4sm7"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.086465 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-p4sm7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.089294 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-87lgz" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.102277 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.103381 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.107030 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-p4sm7"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.110922 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.126806 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ftkrm"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.127760 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.172158 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.243124 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.244466 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrrn2\" (UniqueName: \"kubernetes.io/projected/8ab6a957-a277-4d36-86c5-48dcbe485a49-kube-api-access-qrrn2\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.244558 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8ab6a957-a277-4d36-86c5-48dcbe485a49-ovs-socket\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.244620 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ssq7\" (UniqueName: \"kubernetes.io/projected/539647c9-ef8e-4225-b49b-0f5da3130d48-kube-api-access-9ssq7\") pod \"nmstate-metrics-54757c584b-p4sm7\" (UID: \"539647c9-ef8e-4225-b49b-0f5da3130d48\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-p4sm7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.244670 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8ab6a957-a277-4d36-86c5-48dcbe485a49-nmstate-lock\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.244706 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8ab6a957-a277-4d36-86c5-48dcbe485a49-dbus-socket\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.244847 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg7zw\" (UniqueName: \"kubernetes.io/projected/fa05f98d-769b-41db-9bc1-5d8f19eff210-kube-api-access-fg7zw\") pod \"nmstate-webhook-8474b5b9d8-9kdw7\" (UID: 
\"fa05f98d-769b-41db-9bc1-5d8f19eff210\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.244929 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fa05f98d-769b-41db-9bc1-5d8f19eff210-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9kdw7\" (UID: \"fa05f98d-769b-41db-9bc1-5d8f19eff210\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.245007 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.248101 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-gkqgn" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.248245 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.248318 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.250018 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.345775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8ab6a957-a277-4d36-86c5-48dcbe485a49-dbus-socket\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.345847 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg7zw\" (UniqueName: \"kubernetes.io/projected/fa05f98d-769b-41db-9bc1-5d8f19eff210-kube-api-access-fg7zw\") pod \"nmstate-webhook-8474b5b9d8-9kdw7\" (UID: \"fa05f98d-769b-41db-9bc1-5d8f19eff210\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.345892 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fa05f98d-769b-41db-9bc1-5d8f19eff210-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9kdw7\" (UID: \"fa05f98d-769b-41db-9bc1-5d8f19eff210\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.345917 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrrn2\" (UniqueName: \"kubernetes.io/projected/8ab6a957-a277-4d36-86c5-48dcbe485a49-kube-api-access-qrrn2\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.345943 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8ab6a957-a277-4d36-86c5-48dcbe485a49-ovs-socket\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.345992 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/42c0afb2-20ca-4f23-ab57-db27e84d475a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bhp5f\" (UID: \"42c0afb2-20ca-4f23-ab57-db27e84d475a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.346012 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmb7c\" (UniqueName: \"kubernetes.io/projected/42c0afb2-20ca-4f23-ab57-db27e84d475a-kube-api-access-nmb7c\") pod \"nmstate-console-plugin-7754f76f8b-bhp5f\" (UID: \"42c0afb2-20ca-4f23-ab57-db27e84d475a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.346048 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ssq7\" (UniqueName: \"kubernetes.io/projected/539647c9-ef8e-4225-b49b-0f5da3130d48-kube-api-access-9ssq7\") pod \"nmstate-metrics-54757c584b-p4sm7\" (UID: \"539647c9-ef8e-4225-b49b-0f5da3130d48\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-p4sm7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.346070 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/42c0afb2-20ca-4f23-ab57-db27e84d475a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bhp5f\" (UID: \"42c0afb2-20ca-4f23-ab57-db27e84d475a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.346100 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8ab6a957-a277-4d36-86c5-48dcbe485a49-nmstate-lock\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.346173 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8ab6a957-a277-4d36-86c5-48dcbe485a49-nmstate-lock\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.346453 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8ab6a957-a277-4d36-86c5-48dcbe485a49-dbus-socket\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: E0121 17:29:13.346806 4823 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 17:29:13 crc kubenswrapper[4823]: E0121 17:29:13.346867 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa05f98d-769b-41db-9bc1-5d8f19eff210-tls-key-pair podName:fa05f98d-769b-41db-9bc1-5d8f19eff210 nodeName:}" failed. No retries permitted until 2026-01-21 17:29:13.846833878 +0000 UTC m=+754.772964738 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/fa05f98d-769b-41db-9bc1-5d8f19eff210-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-9kdw7" (UID: "fa05f98d-769b-41db-9bc1-5d8f19eff210") : secret "openshift-nmstate-webhook" not found Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.347118 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8ab6a957-a277-4d36-86c5-48dcbe485a49-ovs-socket\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.389502 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ssq7\" (UniqueName: \"kubernetes.io/projected/539647c9-ef8e-4225-b49b-0f5da3130d48-kube-api-access-9ssq7\") pod \"nmstate-metrics-54757c584b-p4sm7\" (UID: \"539647c9-ef8e-4225-b49b-0f5da3130d48\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-p4sm7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.389663 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg7zw\" (UniqueName: \"kubernetes.io/projected/fa05f98d-769b-41db-9bc1-5d8f19eff210-kube-api-access-fg7zw\") pod \"nmstate-webhook-8474b5b9d8-9kdw7\" (UID: \"fa05f98d-769b-41db-9bc1-5d8f19eff210\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.396389 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrrn2\" (UniqueName: \"kubernetes.io/projected/8ab6a957-a277-4d36-86c5-48dcbe485a49-kube-api-access-qrrn2\") pod \"nmstate-handler-ftkrm\" (UID: \"8ab6a957-a277-4d36-86c5-48dcbe485a49\") " pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.409154 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-p4sm7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.436562 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-867b7744df-qnp5t"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.437555 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.448156 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/42c0afb2-20ca-4f23-ab57-db27e84d475a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bhp5f\" (UID: \"42c0afb2-20ca-4f23-ab57-db27e84d475a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.448202 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmb7c\" (UniqueName: \"kubernetes.io/projected/42c0afb2-20ca-4f23-ab57-db27e84d475a-kube-api-access-nmb7c\") pod \"nmstate-console-plugin-7754f76f8b-bhp5f\" (UID: \"42c0afb2-20ca-4f23-ab57-db27e84d475a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.448238 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/42c0afb2-20ca-4f23-ab57-db27e84d475a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bhp5f\" (UID: \"42c0afb2-20ca-4f23-ab57-db27e84d475a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.450036 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/42c0afb2-20ca-4f23-ab57-db27e84d475a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bhp5f\" (UID: \"42c0afb2-20ca-4f23-ab57-db27e84d475a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.464980 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-867b7744df-qnp5t"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.481716 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.529127 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/42c0afb2-20ca-4f23-ab57-db27e84d475a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bhp5f\" (UID: \"42c0afb2-20ca-4f23-ab57-db27e84d475a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.529169 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmb7c\" (UniqueName: \"kubernetes.io/projected/42c0afb2-20ca-4f23-ab57-db27e84d475a-kube-api-access-nmb7c\") pod \"nmstate-console-plugin-7754f76f8b-bhp5f\" (UID: \"42c0afb2-20ca-4f23-ab57-db27e84d475a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.550159 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3b3983b-6c6d-45dc-af7d-bd148bda163e-console-oauth-config\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.550440 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-trusted-ca-bundle\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.550488 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-console-config\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.550524 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3b3983b-6c6d-45dc-af7d-bd148bda163e-console-serving-cert\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.550553 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv4w9\" (UniqueName: \"kubernetes.io/projected/a3b3983b-6c6d-45dc-af7d-bd148bda163e-kube-api-access-rv4w9\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.550613 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-service-ca\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.550645 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-oauth-serving-cert\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.563235 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.651550 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3b3983b-6c6d-45dc-af7d-bd148bda163e-console-oauth-config\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.651622 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-trusted-ca-bundle\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.651640 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-console-config\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.651663 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3b3983b-6c6d-45dc-af7d-bd148bda163e-console-serving-cert\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.651682 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv4w9\" (UniqueName: \"kubernetes.io/projected/a3b3983b-6c6d-45dc-af7d-bd148bda163e-kube-api-access-rv4w9\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.651719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-service-ca\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.651742 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-oauth-serving-cert\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.652776 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-oauth-serving-cert\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") 
" pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.653892 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-service-ca\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.654230 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-console-config\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.654656 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3b3983b-6c6d-45dc-af7d-bd148bda163e-trusted-ca-bundle\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.660529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3b3983b-6c6d-45dc-af7d-bd148bda163e-console-serving-cert\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.661534 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3b3983b-6c6d-45dc-af7d-bd148bda163e-console-oauth-config\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.674755 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv4w9\" (UniqueName: \"kubernetes.io/projected/a3b3983b-6c6d-45dc-af7d-bd148bda163e-kube-api-access-rv4w9\") pod \"console-867b7744df-qnp5t\" (UID: \"a3b3983b-6c6d-45dc-af7d-bd148bda163e\") " pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: W0121 17:29:13.676468 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod539647c9_ef8e_4225_b49b_0f5da3130d48.slice/crio-2536edebd4fb4fd9a8d7817d3814a11850a58ffe7cbb714cbcd91e3302a9bf7b WatchSource:0}: Error finding container 2536edebd4fb4fd9a8d7817d3814a11850a58ffe7cbb714cbcd91e3302a9bf7b: Status 404 returned error can't find the container with id 2536edebd4fb4fd9a8d7817d3814a11850a58ffe7cbb714cbcd91e3302a9bf7b Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.676817 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-p4sm7"] Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.792474 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f"] Jan 21 17:29:13 crc kubenswrapper[4823]: W0121 17:29:13.796590 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c0afb2_20ca_4f23_ab57_db27e84d475a.slice/crio-d88f117e83ec1d37be2db179294c65807f91a4490e347ad3e7fcc4e3a80812c2 WatchSource:0}: Error finding container d88f117e83ec1d37be2db179294c65807f91a4490e347ad3e7fcc4e3a80812c2: Status 404 returned error can't find the container with id d88f117e83ec1d37be2db179294c65807f91a4490e347ad3e7fcc4e3a80812c2 Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.799405 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.856404 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fa05f98d-769b-41db-9bc1-5d8f19eff210-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9kdw7\" (UID: \"fa05f98d-769b-41db-9bc1-5d8f19eff210\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:13 crc kubenswrapper[4823]: I0121 17:29:13.861774 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fa05f98d-769b-41db-9bc1-5d8f19eff210-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9kdw7\" (UID: \"fa05f98d-769b-41db-9bc1-5d8f19eff210\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:14 crc kubenswrapper[4823]: I0121 17:29:14.022718 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:14 crc kubenswrapper[4823]: I0121 17:29:14.173660 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ftkrm" event={"ID":"8ab6a957-a277-4d36-86c5-48dcbe485a49","Type":"ContainerStarted","Data":"3306993d845a1094f7f564f090ff7cc93ab0cb5f2b86a907451fb634406a4880"} Jan 21 17:29:14 crc kubenswrapper[4823]: I0121 17:29:14.175187 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-p4sm7" event={"ID":"539647c9-ef8e-4225-b49b-0f5da3130d48","Type":"ContainerStarted","Data":"2536edebd4fb4fd9a8d7817d3814a11850a58ffe7cbb714cbcd91e3302a9bf7b"} Jan 21 17:29:14 crc kubenswrapper[4823]: I0121 17:29:14.176616 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" event={"ID":"42c0afb2-20ca-4f23-ab57-db27e84d475a","Type":"ContainerStarted","Data":"d88f117e83ec1d37be2db179294c65807f91a4490e347ad3e7fcc4e3a80812c2"} Jan 21 17:29:14 crc kubenswrapper[4823]: I0121 17:29:14.228666 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-867b7744df-qnp5t"] Jan 21 17:29:14 crc kubenswrapper[4823]: I0121 17:29:14.231987 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7"] Jan 21 17:29:14 crc kubenswrapper[4823]: W0121 17:29:14.235560 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3b3983b_6c6d_45dc_af7d_bd148bda163e.slice/crio-99ab74d465cc3a5ea21696e22119937c1189d7edc8d6c958c55e7a3e8cb6f623 WatchSource:0}: Error finding container 99ab74d465cc3a5ea21696e22119937c1189d7edc8d6c958c55e7a3e8cb6f623: Status 404 returned error can't find the container with id 99ab74d465cc3a5ea21696e22119937c1189d7edc8d6c958c55e7a3e8cb6f623 Jan 21 17:29:14 crc kubenswrapper[4823]: W0121 17:29:14.236125 4823 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa05f98d_769b_41db_9bc1_5d8f19eff210.slice/crio-79ce0cc0dc037f5d9cf229e86a1cf36bcb927f87f46d10385039d22cac2e07b3 WatchSource:0}: Error finding container 79ce0cc0dc037f5d9cf229e86a1cf36bcb927f87f46d10385039d22cac2e07b3: Status 404 returned error can't find the container with id 79ce0cc0dc037f5d9cf229e86a1cf36bcb927f87f46d10385039d22cac2e07b3 Jan 21 17:29:15 crc kubenswrapper[4823]: I0121 17:29:15.070362 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:29:15 crc kubenswrapper[4823]: I0121 17:29:15.070722 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:29:15 crc kubenswrapper[4823]: I0121 17:29:15.185524 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" event={"ID":"fa05f98d-769b-41db-9bc1-5d8f19eff210","Type":"ContainerStarted","Data":"79ce0cc0dc037f5d9cf229e86a1cf36bcb927f87f46d10385039d22cac2e07b3"} Jan 21 17:29:15 crc kubenswrapper[4823]: I0121 17:29:15.188492 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-867b7744df-qnp5t" event={"ID":"a3b3983b-6c6d-45dc-af7d-bd148bda163e","Type":"ContainerStarted","Data":"13decff5701e04b457aefff87c427d15b0617176144ab4061119207e15c08f2a"} Jan 21 17:29:15 crc kubenswrapper[4823]: I0121 17:29:15.188546 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-867b7744df-qnp5t" event={"ID":"a3b3983b-6c6d-45dc-af7d-bd148bda163e","Type":"ContainerStarted","Data":"99ab74d465cc3a5ea21696e22119937c1189d7edc8d6c958c55e7a3e8cb6f623"} Jan 21 17:29:15 crc kubenswrapper[4823]: I0121 17:29:15.208530 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-867b7744df-qnp5t" podStartSLOduration=2.208508568 podStartE2EDuration="2.208508568s" podCreationTimestamp="2026-01-21 17:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:29:15.203678488 +0000 UTC m=+756.129809348" watchObservedRunningTime="2026-01-21 17:29:15.208508568 +0000 UTC m=+756.134639428" Jan 21 17:29:18 crc kubenswrapper[4823]: I0121 17:29:18.213345 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" event={"ID":"fa05f98d-769b-41db-9bc1-5d8f19eff210","Type":"ContainerStarted","Data":"93de840742e03dea48b9840dfe25604b597aa6a05dbb929017e455fd5af5dbfa"} Jan 21 17:29:18 crc kubenswrapper[4823]: I0121 17:29:18.214158 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:18 crc kubenswrapper[4823]: I0121 17:29:18.215286 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-p4sm7" 
event={"ID":"539647c9-ef8e-4225-b49b-0f5da3130d48","Type":"ContainerStarted","Data":"ea40dfa17744e0bf283676b560e2c02eaa5a195a337cec6cb43c76c4b1400482"} Jan 21 17:29:18 crc kubenswrapper[4823]: I0121 17:29:18.218627 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" event={"ID":"42c0afb2-20ca-4f23-ab57-db27e84d475a","Type":"ContainerStarted","Data":"718c28bcb760ee538ef6e77d751b494274f0213095e779e8d8a437d748915ca7"} Jan 21 17:29:18 crc kubenswrapper[4823]: I0121 17:29:18.220603 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ftkrm" event={"ID":"8ab6a957-a277-4d36-86c5-48dcbe485a49","Type":"ContainerStarted","Data":"f53c3cf6d55da17da79cfa6662d78f826c13cac2f30c81de39b335dbdb8d8090"} Jan 21 17:29:18 crc kubenswrapper[4823]: I0121 17:29:18.220845 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:18 crc kubenswrapper[4823]: I0121 17:29:18.242297 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" podStartSLOduration=1.84550648 podStartE2EDuration="5.242272399s" podCreationTimestamp="2026-01-21 17:29:13 +0000 UTC" firstStartedPulling="2026-01-21 17:29:14.238624771 +0000 UTC m=+755.164755621" lastFinishedPulling="2026-01-21 17:29:17.63539064 +0000 UTC m=+758.561521540" observedRunningTime="2026-01-21 17:29:18.234952187 +0000 UTC m=+759.161083047" watchObservedRunningTime="2026-01-21 17:29:18.242272399 +0000 UTC m=+759.168403289" Jan 21 17:29:18 crc kubenswrapper[4823]: I0121 17:29:18.262451 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhp5f" podStartSLOduration=1.408564331 podStartE2EDuration="5.26239715s" podCreationTimestamp="2026-01-21 17:29:13 +0000 UTC" firstStartedPulling="2026-01-21 17:29:13.798987185 +0000 UTC m=+754.725118045" lastFinishedPulling="2026-01-21 17:29:17.652819994 +0000 UTC m=+758.578950864" observedRunningTime="2026-01-21 17:29:18.252977176 +0000 UTC m=+759.179108076" watchObservedRunningTime="2026-01-21 17:29:18.26239715 +0000 UTC m=+759.188528010" Jan 21 17:29:18 crc kubenswrapper[4823]: I0121 17:29:18.279255 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ftkrm" podStartSLOduration=1.177060798 podStartE2EDuration="5.27923673s" podCreationTimestamp="2026-01-21 17:29:13 +0000 UTC" firstStartedPulling="2026-01-21 17:29:13.551599256 +0000 UTC m=+754.477730116" lastFinishedPulling="2026-01-21 17:29:17.653775188 +0000 UTC m=+758.579906048" observedRunningTime="2026-01-21 17:29:18.27762055 +0000 UTC m=+759.203751410" watchObservedRunningTime="2026-01-21 17:29:18.27923673 +0000 UTC m=+759.205367590" Jan 21 17:29:23 crc kubenswrapper[4823]: I0121 17:29:23.517215 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ftkrm" Jan 21 17:29:23 crc kubenswrapper[4823]: I0121 17:29:23.800934 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:23 crc kubenswrapper[4823]: I0121 17:29:23.801381 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:23 crc kubenswrapper[4823]: I0121 17:29:23.808322 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:24 crc kubenswrapper[4823]: I0121 17:29:24.269562 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-867b7744df-qnp5t" Jan 21 17:29:24 crc kubenswrapper[4823]: I0121 17:29:24.333248 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4dj4t"] Jan 21 17:29:34 crc kubenswrapper[4823]: I0121 17:29:34.031021 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9kdw7" Jan 21 17:29:45 crc kubenswrapper[4823]: I0121 17:29:45.070296 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:29:45 crc kubenswrapper[4823]: I0121 17:29:45.070973 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:29:45 crc kubenswrapper[4823]: I0121 17:29:45.071031 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:29:45 crc kubenswrapper[4823]: I0121 17:29:45.071750 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01551426f5ae121d56576d8ebbe31d7f30aa5b3ef8f744c35e0d4bccddcab2b3"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:29:45 crc kubenswrapper[4823]: I0121 17:29:45.071801 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://01551426f5ae121d56576d8ebbe31d7f30aa5b3ef8f744c35e0d4bccddcab2b3" gracePeriod=600 Jan 21 17:29:45 crc kubenswrapper[4823]: I0121 17:29:45.402237 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="01551426f5ae121d56576d8ebbe31d7f30aa5b3ef8f744c35e0d4bccddcab2b3" exitCode=0 Jan 21 17:29:45 crc kubenswrapper[4823]: I0121 17:29:45.403130 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"01551426f5ae121d56576d8ebbe31d7f30aa5b3ef8f744c35e0d4bccddcab2b3"} Jan 21 17:29:45 crc kubenswrapper[4823]: I0121 17:29:45.403227 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"fcf0ff8adb2bfb185b4729793f83cb8f174b95d31f0510e681726cc4e1eb2380"} Jan 21 17:29:45 crc kubenswrapper[4823]: I0121 17:29:45.403263 4823 scope.go:117] "RemoveContainer" containerID="a5e3a96197b34c3415baab0fe948edc38ebee5874db835b6efcf008c938778e5" Jan 21 17:29:46 crc kubenswrapper[4823]: I0121 
17:29:46.415596 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-p4sm7" event={"ID":"539647c9-ef8e-4225-b49b-0f5da3130d48","Type":"ContainerStarted","Data":"bc600343fa60462787c1b4d5ddb9bcaf4cc238719d9820882254c4a8ccb5c6ba"} Jan 21 17:29:46 crc kubenswrapper[4823]: I0121 17:29:46.435257 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-p4sm7" podStartSLOduration=1.834406574 podStartE2EDuration="33.435233924s" podCreationTimestamp="2026-01-21 17:29:13 +0000 UTC" firstStartedPulling="2026-01-21 17:29:13.679175672 +0000 UTC m=+754.605306532" lastFinishedPulling="2026-01-21 17:29:45.280003012 +0000 UTC m=+786.206133882" observedRunningTime="2026-01-21 17:29:46.432497236 +0000 UTC m=+787.358628096" watchObservedRunningTime="2026-01-21 17:29:46.435233924 +0000 UTC m=+787.361364784" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:47.999539 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j"] Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.001568 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.003704 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.015616 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j"] Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.094367 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.094463 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.094512 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cvlp\" (UniqueName: \"kubernetes.io/projected/4186705f-031e-4b39-ae26-9835b3b8a619-kube-api-access-2cvlp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.195378 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cvlp\" (UniqueName: \"kubernetes.io/projected/4186705f-031e-4b39-ae26-9835b3b8a619-kube-api-access-2cvlp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j\" (UID: 
\"4186705f-031e-4b39-ae26-9835b3b8a619\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.195483 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.195539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.196115 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.196152 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.216762 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cvlp\" (UniqueName: \"kubernetes.io/projected/4186705f-031e-4b39-ae26-9835b3b8a619-kube-api-access-2cvlp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.318404 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:48 crc kubenswrapper[4823]: I0121 17:29:48.578418 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j"] Jan 21 17:29:48 crc kubenswrapper[4823]: W0121 17:29:48.592058 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4186705f_031e_4b39_ae26_9835b3b8a619.slice/crio-e7dc0eb6ace79d0145e4046893b51d5afa4898aa664ce45df3614ed28ca179e0 WatchSource:0}: Error finding container e7dc0eb6ace79d0145e4046893b51d5afa4898aa664ce45df3614ed28ca179e0: Status 404 returned error can't find the container with id e7dc0eb6ace79d0145e4046893b51d5afa4898aa664ce45df3614ed28ca179e0 Jan 21 17:29:49 crc kubenswrapper[4823]: I0121 17:29:49.422739 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4dj4t" podUID="ac886837-67ac-48e7-b5cb-024a0ed1ea01" containerName="console" containerID="cri-o://07d73f41bcc4e03bb8243a1e7bd5640de55ffcfba9a25714c8c90f58538cb19c" gracePeriod=15 Jan 21 17:29:49 crc kubenswrapper[4823]: I0121 17:29:49.441944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" event={"ID":"4186705f-031e-4b39-ae26-9835b3b8a619","Type":"ContainerStarted","Data":"e7dc0eb6ace79d0145e4046893b51d5afa4898aa664ce45df3614ed28ca179e0"} Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.309429 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4pt7t"] Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.311106 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.323873 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4pt7t"] Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.346706 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-utilities\") pod \"redhat-operators-4pt7t\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.346777 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-catalog-content\") pod \"redhat-operators-4pt7t\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.346819 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtxwm\" (UniqueName: \"kubernetes.io/projected/e0fd8618-b533-4885-a2c7-7782d74ba56f-kube-api-access-rtxwm\") pod \"redhat-operators-4pt7t\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.448072 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-catalog-content\") pod \"redhat-operators-4pt7t\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.448131 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtxwm\" (UniqueName: \"kubernetes.io/projected/e0fd8618-b533-4885-a2c7-7782d74ba56f-kube-api-access-rtxwm\") pod \"redhat-operators-4pt7t\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.448262 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-utilities\") pod \"redhat-operators-4pt7t\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.449477 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-catalog-content\") pod \"redhat-operators-4pt7t\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.449606 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-utilities\") pod \"redhat-operators-4pt7t\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.474233 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rtxwm\" (UniqueName: \"kubernetes.io/projected/e0fd8618-b533-4885-a2c7-7782d74ba56f-kube-api-access-rtxwm\") pod \"redhat-operators-4pt7t\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:50 crc kubenswrapper[4823]: I0121 17:29:50.630316 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.143011 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4pt7t"] Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.472627 4823 generic.go:334] "Generic (PLEG): container finished" podID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerID="a64da2b9d00c5a067bf42dd11fecf431dda6fff18e112f586e4289aded070c0b" exitCode=0 Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.472755 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pt7t" event={"ID":"e0fd8618-b533-4885-a2c7-7782d74ba56f","Type":"ContainerDied","Data":"a64da2b9d00c5a067bf42dd11fecf431dda6fff18e112f586e4289aded070c0b"} Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.473523 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pt7t" event={"ID":"e0fd8618-b533-4885-a2c7-7782d74ba56f","Type":"ContainerStarted","Data":"c8c9a5cb1d228a39c04696ea38298cd93cd9bf9a9496775c667f4c54a508284d"} Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.482950 4823 generic.go:334] "Generic (PLEG): container finished" podID="4186705f-031e-4b39-ae26-9835b3b8a619" containerID="c5015c9f4504c285a57787b5afdd79ec72de941baaa8ccf0f76ee8e2556d22b9" exitCode=0 Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.483154 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" event={"ID":"4186705f-031e-4b39-ae26-9835b3b8a619","Type":"ContainerDied","Data":"c5015c9f4504c285a57787b5afdd79ec72de941baaa8ccf0f76ee8e2556d22b9"} Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.494297 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4dj4t_ac886837-67ac-48e7-b5cb-024a0ed1ea01/console/0.log" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.494364 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac886837-67ac-48e7-b5cb-024a0ed1ea01" containerID="07d73f41bcc4e03bb8243a1e7bd5640de55ffcfba9a25714c8c90f58538cb19c" exitCode=2 Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.494415 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4dj4t" event={"ID":"ac886837-67ac-48e7-b5cb-024a0ed1ea01","Type":"ContainerDied","Data":"07d73f41bcc4e03bb8243a1e7bd5640de55ffcfba9a25714c8c90f58538cb19c"} Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.681388 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4dj4t_ac886837-67ac-48e7-b5cb-024a0ed1ea01/console/0.log" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.681475 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.878352 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-config\") pod \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.878430 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-oauth-serving-cert\") pod \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.878476 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-service-ca\") pod \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.878543 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-trusted-ca-bundle\") pod \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.878630 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-serving-cert\") pod \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.878671 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q92dl\" (UniqueName: \"kubernetes.io/projected/ac886837-67ac-48e7-b5cb-024a0ed1ea01-kube-api-access-q92dl\") pod \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.878714 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-oauth-config\") pod \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\" (UID: \"ac886837-67ac-48e7-b5cb-024a0ed1ea01\") " Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.879408 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-service-ca" (OuterVolumeSpecName: "service-ca") pod "ac886837-67ac-48e7-b5cb-024a0ed1ea01" (UID: "ac886837-67ac-48e7-b5cb-024a0ed1ea01"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.879442 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ac886837-67ac-48e7-b5cb-024a0ed1ea01" (UID: "ac886837-67ac-48e7-b5cb-024a0ed1ea01"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.879486 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ac886837-67ac-48e7-b5cb-024a0ed1ea01" (UID: "ac886837-67ac-48e7-b5cb-024a0ed1ea01"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.879465 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-config" (OuterVolumeSpecName: "console-config") pod "ac886837-67ac-48e7-b5cb-024a0ed1ea01" (UID: "ac886837-67ac-48e7-b5cb-024a0ed1ea01"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.896382 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ac886837-67ac-48e7-b5cb-024a0ed1ea01" (UID: "ac886837-67ac-48e7-b5cb-024a0ed1ea01"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.898391 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ac886837-67ac-48e7-b5cb-024a0ed1ea01" (UID: "ac886837-67ac-48e7-b5cb-024a0ed1ea01"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.898393 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac886837-67ac-48e7-b5cb-024a0ed1ea01-kube-api-access-q92dl" (OuterVolumeSpecName: "kube-api-access-q92dl") pod "ac886837-67ac-48e7-b5cb-024a0ed1ea01" (UID: "ac886837-67ac-48e7-b5cb-024a0ed1ea01"). InnerVolumeSpecName "kube-api-access-q92dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.980479 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.980546 4823 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.980563 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q92dl\" (UniqueName: \"kubernetes.io/projected/ac886837-67ac-48e7-b5cb-024a0ed1ea01-kube-api-access-q92dl\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.980585 4823 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.980601 4823 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.980614 4823 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:51 crc kubenswrapper[4823]: I0121 17:29:51.980628 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac886837-67ac-48e7-b5cb-024a0ed1ea01-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:52 crc kubenswrapper[4823]: I0121 17:29:52.501845 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4dj4t_ac886837-67ac-48e7-b5cb-024a0ed1ea01/console/0.log" Jan 21 17:29:52 crc kubenswrapper[4823]: I0121 17:29:52.501944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4dj4t" event={"ID":"ac886837-67ac-48e7-b5cb-024a0ed1ea01","Type":"ContainerDied","Data":"0fca38632b59fddb582d07b781c01bee6f556d10b794276431bb546df00f553e"} Jan 21 17:29:52 crc kubenswrapper[4823]: I0121 17:29:52.502011 4823 scope.go:117] "RemoveContainer" containerID="07d73f41bcc4e03bb8243a1e7bd5640de55ffcfba9a25714c8c90f58538cb19c" Jan 21 17:29:52 crc kubenswrapper[4823]: I0121 17:29:52.502021 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4dj4t" Jan 21 17:29:52 crc kubenswrapper[4823]: I0121 17:29:52.537088 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4dj4t"] Jan 21 17:29:52 crc kubenswrapper[4823]: I0121 17:29:52.541347 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4dj4t"] Jan 21 17:29:53 crc kubenswrapper[4823]: I0121 17:29:53.352561 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac886837-67ac-48e7-b5cb-024a0ed1ea01" path="/var/lib/kubelet/pods/ac886837-67ac-48e7-b5cb-024a0ed1ea01/volumes" Jan 21 17:29:55 crc kubenswrapper[4823]: I0121 17:29:55.538004 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pt7t" event={"ID":"e0fd8618-b533-4885-a2c7-7782d74ba56f","Type":"ContainerStarted","Data":"ee0f0486fe12400ce7d7c80e6ef5de2017566864616bed491ba287855d470f9e"} Jan 21 17:29:55 crc kubenswrapper[4823]: I0121 17:29:55.540545 4823 generic.go:334] "Generic (PLEG): container finished" podID="4186705f-031e-4b39-ae26-9835b3b8a619" containerID="b28c511338d897bafe528dae7bcfe01939a55345e2e2ec6ea908edcdb3267c72" exitCode=0 Jan 21 17:29:55 crc kubenswrapper[4823]: I0121 17:29:55.540628 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" event={"ID":"4186705f-031e-4b39-ae26-9835b3b8a619","Type":"ContainerDied","Data":"b28c511338d897bafe528dae7bcfe01939a55345e2e2ec6ea908edcdb3267c72"} Jan 21 17:29:56 crc kubenswrapper[4823]: I0121 17:29:56.552525 4823 generic.go:334] "Generic (PLEG): container finished" podID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerID="ee0f0486fe12400ce7d7c80e6ef5de2017566864616bed491ba287855d470f9e" exitCode=0 Jan 21 17:29:56 crc kubenswrapper[4823]: I0121 17:29:56.552615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pt7t" event={"ID":"e0fd8618-b533-4885-a2c7-7782d74ba56f","Type":"ContainerDied","Data":"ee0f0486fe12400ce7d7c80e6ef5de2017566864616bed491ba287855d470f9e"} Jan 21 17:29:56 crc kubenswrapper[4823]: I0121 17:29:56.562902 4823 generic.go:334] "Generic (PLEG): container finished" podID="4186705f-031e-4b39-ae26-9835b3b8a619" containerID="c31c5a90e0c7639ae1d5938f3ce0cb8671b552ca0857964fef6bb2d17343941e" exitCode=0 Jan 21 17:29:56 crc kubenswrapper[4823]: I0121 17:29:56.562969 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" event={"ID":"4186705f-031e-4b39-ae26-9835b3b8a619","Type":"ContainerDied","Data":"c31c5a90e0c7639ae1d5938f3ce0cb8671b552ca0857964fef6bb2d17343941e"} Jan 21 17:29:57 crc kubenswrapper[4823]: I0121 17:29:57.573514 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pt7t" event={"ID":"e0fd8618-b533-4885-a2c7-7782d74ba56f","Type":"ContainerStarted","Data":"11f3685c9be16e9d965b1b48489852f2bb9dfa583b7ca57932d4db988e2a4e04"} Jan 21 17:29:57 crc kubenswrapper[4823]: I0121 17:29:57.597352 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4pt7t" podStartSLOduration=2.106407129 podStartE2EDuration="7.597332046s" podCreationTimestamp="2026-01-21 17:29:50 +0000 UTC" firstStartedPulling="2026-01-21 17:29:51.476269641 +0000 UTC m=+792.402400511" lastFinishedPulling="2026-01-21 17:29:56.967194568 +0000 UTC 
m=+797.893325428" observedRunningTime="2026-01-21 17:29:57.594360412 +0000 UTC m=+798.520491282" watchObservedRunningTime="2026-01-21 17:29:57.597332046 +0000 UTC m=+798.523462916" Jan 21 17:29:57 crc kubenswrapper[4823]: I0121 17:29:57.867761 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:29:57 crc kubenswrapper[4823]: I0121 17:29:57.991421 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-bundle\") pod \"4186705f-031e-4b39-ae26-9835b3b8a619\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " Jan 21 17:29:57 crc kubenswrapper[4823]: I0121 17:29:57.991551 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cvlp\" (UniqueName: \"kubernetes.io/projected/4186705f-031e-4b39-ae26-9835b3b8a619-kube-api-access-2cvlp\") pod \"4186705f-031e-4b39-ae26-9835b3b8a619\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " Jan 21 17:29:57 crc kubenswrapper[4823]: I0121 17:29:57.991574 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-util\") pod \"4186705f-031e-4b39-ae26-9835b3b8a619\" (UID: \"4186705f-031e-4b39-ae26-9835b3b8a619\") " Jan 21 17:29:57 crc kubenswrapper[4823]: I0121 17:29:57.992421 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-bundle" (OuterVolumeSpecName: "bundle") pod "4186705f-031e-4b39-ae26-9835b3b8a619" (UID: "4186705f-031e-4b39-ae26-9835b3b8a619"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:29:57 crc kubenswrapper[4823]: I0121 17:29:57.998438 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4186705f-031e-4b39-ae26-9835b3b8a619-kube-api-access-2cvlp" (OuterVolumeSpecName: "kube-api-access-2cvlp") pod "4186705f-031e-4b39-ae26-9835b3b8a619" (UID: "4186705f-031e-4b39-ae26-9835b3b8a619"). InnerVolumeSpecName "kube-api-access-2cvlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:29:58 crc kubenswrapper[4823]: I0121 17:29:58.004396 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-util" (OuterVolumeSpecName: "util") pod "4186705f-031e-4b39-ae26-9835b3b8a619" (UID: "4186705f-031e-4b39-ae26-9835b3b8a619"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:29:58 crc kubenswrapper[4823]: I0121 17:29:58.093190 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:58 crc kubenswrapper[4823]: I0121 17:29:58.093250 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cvlp\" (UniqueName: \"kubernetes.io/projected/4186705f-031e-4b39-ae26-9835b3b8a619-kube-api-access-2cvlp\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:58 crc kubenswrapper[4823]: I0121 17:29:58.093266 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4186705f-031e-4b39-ae26-9835b3b8a619-util\") on node \"crc\" DevicePath \"\"" Jan 21 17:29:58 crc kubenswrapper[4823]: I0121 17:29:58.584648 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" event={"ID":"4186705f-031e-4b39-ae26-9835b3b8a619","Type":"ContainerDied","Data":"e7dc0eb6ace79d0145e4046893b51d5afa4898aa664ce45df3614ed28ca179e0"} Jan 21 17:29:58 crc kubenswrapper[4823]: I0121 17:29:58.584703 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7dc0eb6ace79d0145e4046893b51d5afa4898aa664ce45df3614ed28ca179e0" Jan 21 17:29:58 crc kubenswrapper[4823]: I0121 17:29:58.584739 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.183509 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r"] Jan 21 17:30:00 crc kubenswrapper[4823]: E0121 17:30:00.184339 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac886837-67ac-48e7-b5cb-024a0ed1ea01" containerName="console" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.184357 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac886837-67ac-48e7-b5cb-024a0ed1ea01" containerName="console" Jan 21 17:30:00 crc kubenswrapper[4823]: E0121 17:30:00.184371 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4186705f-031e-4b39-ae26-9835b3b8a619" containerName="extract" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.184378 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4186705f-031e-4b39-ae26-9835b3b8a619" containerName="extract" Jan 21 17:30:00 crc kubenswrapper[4823]: E0121 17:30:00.184389 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4186705f-031e-4b39-ae26-9835b3b8a619" containerName="pull" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.184395 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4186705f-031e-4b39-ae26-9835b3b8a619" containerName="pull" Jan 21 17:30:00 crc kubenswrapper[4823]: E0121 17:30:00.184407 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4186705f-031e-4b39-ae26-9835b3b8a619" containerName="util" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.184413 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4186705f-031e-4b39-ae26-9835b3b8a619" containerName="util" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.184549 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4186705f-031e-4b39-ae26-9835b3b8a619" containerName="extract" Jan 
21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.184561 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac886837-67ac-48e7-b5cb-024a0ed1ea01" containerName="console" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.185114 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.186999 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.187573 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.214220 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r"] Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.326437 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef08832-b476-4255-b562-5aa266113f1f-config-volume\") pod \"collect-profiles-29483610-5g22r\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.326521 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef08832-b476-4255-b562-5aa266113f1f-secret-volume\") pod \"collect-profiles-29483610-5g22r\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.326575 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5b5s\" (UniqueName: \"kubernetes.io/projected/2ef08832-b476-4255-b562-5aa266113f1f-kube-api-access-x5b5s\") pod \"collect-profiles-29483610-5g22r\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.428545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef08832-b476-4255-b562-5aa266113f1f-config-volume\") pod \"collect-profiles-29483610-5g22r\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.428621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef08832-b476-4255-b562-5aa266113f1f-secret-volume\") pod \"collect-profiles-29483610-5g22r\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r"
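
Two things are happening in the lines above. First, the suffix of the job name collect-profiles-29483610 is the scheduled time in minutes since the Unix epoch (29483610 min = 1769016600 s = 2026-01-21 17:30:00 UTC), the usual CronJob-controller naming scheme. Second, before admitting the new pod, the kubelet's resource managers purge leftover state: the cpu_manager.go:410, state_mem.go:107, and memory_manager.go:354 records drop the per-container CPU-set and memory assignments still held for the deleted console and bundle pods; despite the E (error) prefix on some of them, this reads as routine admission-time cleanup. A sketch of that sweep, with illustrative types rather than the kubelet's own:

// stale_state_sketch.go: the RemoveStaleState idea from cpu_manager.go above,
// i.e. drop per-container assignments for pods that no longer exist.
package main

import "fmt"

func main() {
	// podUID -> container name -> assigned CPU set (a string, for brevity).
	assignments := map[string]map[string]string{
		"ac886837-67ac-48e7-b5cb-024a0ed1ea01": {"console": "0-1"},
		"4186705f-031e-4b39-ae26-9835b3b8a619": {"extract": "2", "pull": "2", "util": "2"},
	}
	active := map[string]bool{} // neither pod exists any more

	for podUID, containers := range assignments {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container pod=%s container=%s\n", podUID, name)
		}
		delete(assignments, podUID) // deleting during range is allowed in Go
	}
}

Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.428707 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5b5s\" (UniqueName: \"kubernetes.io/projected/2ef08832-b476-4255-b562-5aa266113f1f-kube-api-access-x5b5s\") pod \"collect-profiles-29483610-5g22r\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " 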
pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.429966 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef08832-b476-4255-b562-5aa266113f1f-config-volume\") pod \"collect-profiles-29483610-5g22r\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.434920 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef08832-b476-4255-b562-5aa266113f1f-secret-volume\") pod \"collect-profiles-29483610-5g22r\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.450111 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5b5s\" (UniqueName: \"kubernetes.io/projected/2ef08832-b476-4255-b562-5aa266113f1f-kube-api-access-x5b5s\") pod \"collect-profiles-29483610-5g22r\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.506171 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.630746 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:30:00 crc kubenswrapper[4823]: I0121 17:30:00.630819 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:30:01 crc kubenswrapper[4823]: I0121 17:30:01.063483 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r"] Jan 21 17:30:01 crc kubenswrapper[4823]: I0121 17:30:01.608259 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" event={"ID":"2ef08832-b476-4255-b562-5aa266113f1f","Type":"ContainerStarted","Data":"417fe315e01b2b0477b7d8db5d5c45da57a0c21309cb12c62337b2090cfacb0e"} Jan 21 17:30:01 crc kubenswrapper[4823]: I0121 17:30:01.692207 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4pt7t" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerName="registry-server" probeResult="failure" output=< Jan 21 17:30:01 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 21 17:30:01 crc kubenswrapper[4823]: > Jan 21 17:30:02 crc kubenswrapper[4823]: I0121 17:30:02.617865 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" event={"ID":"2ef08832-b476-4255-b562-5aa266113f1f","Type":"ContainerDied","Data":"b1735f77a7f985a5e61b511d0e1b76fa1867ef62e35a2c9b45649a452a43a8dc"} Jan 21 17:30:02 crc kubenswrapper[4823]: I0121 17:30:02.617758 4823 generic.go:334] "Generic (PLEG): container finished" podID="2ef08832-b476-4255-b562-5aa266113f1f" containerID="b1735f77a7f985a5e61b511d0e1b76fa1867ef62e35a2c9b45649a452a43a8dc" exitCode=0 Jan 21 17:30:03 crc kubenswrapper[4823]: I0121 17:30:03.931114 4823 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.098618 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5b5s\" (UniqueName: \"kubernetes.io/projected/2ef08832-b476-4255-b562-5aa266113f1f-kube-api-access-x5b5s\") pod \"2ef08832-b476-4255-b562-5aa266113f1f\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.098804 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef08832-b476-4255-b562-5aa266113f1f-secret-volume\") pod \"2ef08832-b476-4255-b562-5aa266113f1f\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.098886 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef08832-b476-4255-b562-5aa266113f1f-config-volume\") pod \"2ef08832-b476-4255-b562-5aa266113f1f\" (UID: \"2ef08832-b476-4255-b562-5aa266113f1f\") " Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.099823 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef08832-b476-4255-b562-5aa266113f1f-config-volume" (OuterVolumeSpecName: "config-volume") pod "2ef08832-b476-4255-b562-5aa266113f1f" (UID: "2ef08832-b476-4255-b562-5aa266113f1f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.107058 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef08832-b476-4255-b562-5aa266113f1f-kube-api-access-x5b5s" (OuterVolumeSpecName: "kube-api-access-x5b5s") pod "2ef08832-b476-4255-b562-5aa266113f1f" (UID: "2ef08832-b476-4255-b562-5aa266113f1f"). InnerVolumeSpecName "kube-api-access-x5b5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.110168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef08832-b476-4255-b562-5aa266113f1f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2ef08832-b476-4255-b562-5aa266113f1f" (UID: "2ef08832-b476-4255-b562-5aa266113f1f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.200962 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef08832-b476-4255-b562-5aa266113f1f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.201009 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef08832-b476-4255-b562-5aa266113f1f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.201035 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5b5s\" (UniqueName: \"kubernetes.io/projected/2ef08832-b476-4255-b562-5aa266113f1f-kube-api-access-x5b5s\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.633941 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" event={"ID":"2ef08832-b476-4255-b562-5aa266113f1f","Type":"ContainerDied","Data":"417fe315e01b2b0477b7d8db5d5c45da57a0c21309cb12c62337b2090cfacb0e"} Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.634009 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="417fe315e01b2b0477b7d8db5d5c45da57a0c21309cb12c62337b2090cfacb0e" Jan 21 17:30:04 crc kubenswrapper[4823]: I0121 17:30:04.634056 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.286203 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55"] Jan 21 17:30:05 crc kubenswrapper[4823]: E0121 17:30:05.288477 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef08832-b476-4255-b562-5aa266113f1f" containerName="collect-profiles" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.288588 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef08832-b476-4255-b562-5aa266113f1f" containerName="collect-profiles" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.288797 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef08832-b476-4255-b562-5aa266113f1f" containerName="collect-profiles" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.289655 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.293405 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.293909 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.294522 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fv48f" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.294824 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.298329 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.317194 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55"] Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.418689 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed67552e-2644-4592-be91-31fb5ad23152-webhook-cert\") pod \"metallb-operator-controller-manager-7b84d84c8d-gvj55\" (UID: \"ed67552e-2644-4592-be91-31fb5ad23152\") " pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.418785 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltp7r\" (UniqueName: \"kubernetes.io/projected/ed67552e-2644-4592-be91-31fb5ad23152-kube-api-access-ltp7r\") pod \"metallb-operator-controller-manager-7b84d84c8d-gvj55\" (UID: \"ed67552e-2644-4592-be91-31fb5ad23152\") " pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.418851 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ed67552e-2644-4592-be91-31fb5ad23152-apiservice-cert\") pod \"metallb-operator-controller-manager-7b84d84c8d-gvj55\" (UID: \"ed67552e-2644-4592-be91-31fb5ad23152\") " pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.521133 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed67552e-2644-4592-be91-31fb5ad23152-webhook-cert\") pod \"metallb-operator-controller-manager-7b84d84c8d-gvj55\" (UID: \"ed67552e-2644-4592-be91-31fb5ad23152\") " pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.521219 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltp7r\" (UniqueName: \"kubernetes.io/projected/ed67552e-2644-4592-be91-31fb5ad23152-kube-api-access-ltp7r\") pod \"metallb-operator-controller-manager-7b84d84c8d-gvj55\" (UID: \"ed67552e-2644-4592-be91-31fb5ad23152\") " pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.522315 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ed67552e-2644-4592-be91-31fb5ad23152-apiservice-cert\") pod \"metallb-operator-controller-manager-7b84d84c8d-gvj55\" (UID: \"ed67552e-2644-4592-be91-31fb5ad23152\") " pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.528778 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ed67552e-2644-4592-be91-31fb5ad23152-apiservice-cert\") pod \"metallb-operator-controller-manager-7b84d84c8d-gvj55\" (UID: \"ed67552e-2644-4592-be91-31fb5ad23152\") " pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.533779 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed67552e-2644-4592-be91-31fb5ad23152-webhook-cert\") pod \"metallb-operator-controller-manager-7b84d84c8d-gvj55\" (UID: \"ed67552e-2644-4592-be91-31fb5ad23152\") " pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.554690 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltp7r\" (UniqueName: \"kubernetes.io/projected/ed67552e-2644-4592-be91-31fb5ad23152-kube-api-access-ltp7r\") pod \"metallb-operator-controller-manager-7b84d84c8d-gvj55\" (UID: \"ed67552e-2644-4592-be91-31fb5ad23152\") " pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.564718 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh"] Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.565924 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.569057 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.569717 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.570088 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-q2rt2" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.591461 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh"] Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.612470 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.725006 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db56558d-45f7-4053-b14f-fb6c4ad56f3e-apiservice-cert\") pod \"metallb-operator-webhook-server-6774cbb849-xcfsh\" (UID: \"db56558d-45f7-4053-b14f-fb6c4ad56f3e\") " pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.725100 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db56558d-45f7-4053-b14f-fb6c4ad56f3e-webhook-cert\") pod \"metallb-operator-webhook-server-6774cbb849-xcfsh\" (UID: \"db56558d-45f7-4053-b14f-fb6c4ad56f3e\") " pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.725225 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxks\" (UniqueName: \"kubernetes.io/projected/db56558d-45f7-4053-b14f-fb6c4ad56f3e-kube-api-access-phxks\") pod \"metallb-operator-webhook-server-6774cbb849-xcfsh\" (UID: \"db56558d-45f7-4053-b14f-fb6c4ad56f3e\") " pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.827117 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db56558d-45f7-4053-b14f-fb6c4ad56f3e-apiservice-cert\") pod \"metallb-operator-webhook-server-6774cbb849-xcfsh\" (UID: \"db56558d-45f7-4053-b14f-fb6c4ad56f3e\") " pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.827868 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db56558d-45f7-4053-b14f-fb6c4ad56f3e-webhook-cert\") pod \"metallb-operator-webhook-server-6774cbb849-xcfsh\" (UID: \"db56558d-45f7-4053-b14f-fb6c4ad56f3e\") " pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.827961 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phxks\" (UniqueName: \"kubernetes.io/projected/db56558d-45f7-4053-b14f-fb6c4ad56f3e-kube-api-access-phxks\") pod \"metallb-operator-webhook-server-6774cbb849-xcfsh\" (UID: \"db56558d-45f7-4053-b14f-fb6c4ad56f3e\") " pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.836912 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db56558d-45f7-4053-b14f-fb6c4ad56f3e-apiservice-cert\") pod \"metallb-operator-webhook-server-6774cbb849-xcfsh\" (UID: \"db56558d-45f7-4053-b14f-fb6c4ad56f3e\") " pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.836976 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db56558d-45f7-4053-b14f-fb6c4ad56f3e-webhook-cert\") pod \"metallb-operator-webhook-server-6774cbb849-xcfsh\" (UID: \"db56558d-45f7-4053-b14f-fb6c4ad56f3e\") " 
pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.856915 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phxks\" (UniqueName: \"kubernetes.io/projected/db56558d-45f7-4053-b14f-fb6c4ad56f3e-kube-api-access-phxks\") pod \"metallb-operator-webhook-server-6774cbb849-xcfsh\" (UID: \"db56558d-45f7-4053-b14f-fb6c4ad56f3e\") " pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.908835 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:05 crc kubenswrapper[4823]: W0121 17:30:05.937652 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded67552e_2644_4592_be91_31fb5ad23152.slice/crio-af0c98094428b629c5e813f4977418934426a20c2acbd125ed5daa67d480b68b WatchSource:0}: Error finding container af0c98094428b629c5e813f4977418934426a20c2acbd125ed5daa67d480b68b: Status 404 returned error can't find the container with id af0c98094428b629c5e813f4977418934426a20c2acbd125ed5daa67d480b68b Jan 21 17:30:05 crc kubenswrapper[4823]: I0121 17:30:05.938484 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55"] Jan 21 17:30:06 crc kubenswrapper[4823]: I0121 17:30:06.195756 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh"] Jan 21 17:30:06 crc kubenswrapper[4823]: W0121 17:30:06.199026 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb56558d_45f7_4053_b14f_fb6c4ad56f3e.slice/crio-3fd104d4c4bb7b93831359c13df40c87bec2e81780ddde1af5194d2cbeaacb0e WatchSource:0}: Error finding container 3fd104d4c4bb7b93831359c13df40c87bec2e81780ddde1af5194d2cbeaacb0e: Status 404 returned error can't find the container with id 3fd104d4c4bb7b93831359c13df40c87bec2e81780ddde1af5194d2cbeaacb0e Jan 21 17:30:06 crc kubenswrapper[4823]: I0121 17:30:06.657487 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" event={"ID":"ed67552e-2644-4592-be91-31fb5ad23152","Type":"ContainerStarted","Data":"af0c98094428b629c5e813f4977418934426a20c2acbd125ed5daa67d480b68b"} Jan 21 17:30:06 crc kubenswrapper[4823]: I0121 17:30:06.659132 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" event={"ID":"db56558d-45f7-4053-b14f-fb6c4ad56f3e","Type":"ContainerStarted","Data":"3fd104d4c4bb7b93831359c13df40c87bec2e81780ddde1af5194d2cbeaacb0e"} Jan 21 17:30:10 crc kubenswrapper[4823]: I0121 17:30:10.684961 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:30:10 crc kubenswrapper[4823]: I0121 17:30:10.740734 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:30:10 crc kubenswrapper[4823]: I0121 17:30:10.935181 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4pt7t"] Jan 21 17:30:12 crc kubenswrapper[4823]: I0121 17:30:12.724495 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-4pt7t" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerName="registry-server" containerID="cri-o://11f3685c9be16e9d965b1b48489852f2bb9dfa583b7ca57932d4db988e2a4e04" gracePeriod=2 Jan 21 17:30:13 crc kubenswrapper[4823]: I0121 17:30:13.737733 4823 generic.go:334] "Generic (PLEG): container finished" podID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerID="11f3685c9be16e9d965b1b48489852f2bb9dfa583b7ca57932d4db988e2a4e04" exitCode=0 Jan 21 17:30:13 crc kubenswrapper[4823]: I0121 17:30:13.737925 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pt7t" event={"ID":"e0fd8618-b533-4885-a2c7-7782d74ba56f","Type":"ContainerDied","Data":"11f3685c9be16e9d965b1b48489852f2bb9dfa583b7ca57932d4db988e2a4e04"} Jan 21 17:30:13 crc kubenswrapper[4823]: I0121 17:30:13.904338 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.075764 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-catalog-content\") pod \"e0fd8618-b533-4885-a2c7-7782d74ba56f\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.075956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-utilities\") pod \"e0fd8618-b533-4885-a2c7-7782d74ba56f\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.075982 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtxwm\" (UniqueName: \"kubernetes.io/projected/e0fd8618-b533-4885-a2c7-7782d74ba56f-kube-api-access-rtxwm\") pod \"e0fd8618-b533-4885-a2c7-7782d74ba56f\" (UID: \"e0fd8618-b533-4885-a2c7-7782d74ba56f\") " Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.077041 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-utilities" (OuterVolumeSpecName: "utilities") pod "e0fd8618-b533-4885-a2c7-7782d74ba56f" (UID: "e0fd8618-b533-4885-a2c7-7782d74ba56f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.084213 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fd8618-b533-4885-a2c7-7782d74ba56f-kube-api-access-rtxwm" (OuterVolumeSpecName: "kube-api-access-rtxwm") pod "e0fd8618-b533-4885-a2c7-7782d74ba56f" (UID: "e0fd8618-b533-4885-a2c7-7782d74ba56f"). InnerVolumeSpecName "kube-api-access-rtxwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.177481 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.177529 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtxwm\" (UniqueName: \"kubernetes.io/projected/e0fd8618-b533-4885-a2c7-7782d74ba56f-kube-api-access-rtxwm\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.196711 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0fd8618-b533-4885-a2c7-7782d74ba56f" (UID: "e0fd8618-b533-4885-a2c7-7782d74ba56f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.278760 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fd8618-b533-4885-a2c7-7782d74ba56f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.748153 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pt7t" event={"ID":"e0fd8618-b533-4885-a2c7-7782d74ba56f","Type":"ContainerDied","Data":"c8c9a5cb1d228a39c04696ea38298cd93cd9bf9a9496775c667f4c54a508284d"} Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.748221 4823 scope.go:117] "RemoveContainer" containerID="11f3685c9be16e9d965b1b48489852f2bb9dfa583b7ca57932d4db988e2a4e04" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.748373 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4pt7t" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.750528 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" event={"ID":"db56558d-45f7-4053-b14f-fb6c4ad56f3e","Type":"ContainerStarted","Data":"28974ad6ee75cb7fced18322b6fdc2c708652342bbb876d7e0098084d606cd03"} Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.751443 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.753912 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" event={"ID":"ed67552e-2644-4592-be91-31fb5ad23152","Type":"ContainerStarted","Data":"d701e31e65be6d7116c195c60e27d7e3d3c0f8c2fd1f75be47a5a29d49125087"} Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.754417 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.784313 4823 scope.go:117] "RemoveContainer" containerID="ee0f0486fe12400ce7d7c80e6ef5de2017566864616bed491ba287855d470f9e" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.799566 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" podStartSLOduration=2.301999693 podStartE2EDuration="9.799535157s" podCreationTimestamp="2026-01-21 17:30:05 +0000 UTC" firstStartedPulling="2026-01-21 17:30:06.201933315 +0000 UTC m=+807.128064185" lastFinishedPulling="2026-01-21 17:30:13.699468789 +0000 UTC m=+814.625599649" observedRunningTime="2026-01-21 17:30:14.785674572 +0000 UTC m=+815.711805452" watchObservedRunningTime="2026-01-21 17:30:14.799535157 +0000 UTC m=+815.725666017" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.803934 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4pt7t"] Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.818342 4823 scope.go:117] "RemoveContainer" containerID="a64da2b9d00c5a067bf42dd11fecf431dda6fff18e112f586e4289aded070c0b" Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.823964 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4pt7t"] Jan 21 17:30:14 crc kubenswrapper[4823]: I0121 17:30:14.842685 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" podStartSLOduration=2.119665383 podStartE2EDuration="9.842661581s" podCreationTimestamp="2026-01-21 17:30:05 +0000 UTC" firstStartedPulling="2026-01-21 17:30:05.943652504 +0000 UTC m=+806.869783364" lastFinishedPulling="2026-01-21 17:30:13.666648702 +0000 UTC m=+814.592779562" observedRunningTime="2026-01-21 17:30:14.836894898 +0000 UTC m=+815.763025758" watchObservedRunningTime="2026-01-21 17:30:14.842661581 +0000 UTC m=+815.768792441" Jan 21 17:30:15 crc kubenswrapper[4823]: I0121 17:30:15.350644 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" path="/var/lib/kubelet/pods/e0fd8618-b533-4885-a2c7-7782d74ba56f/volumes" Jan 21 17:30:25 crc kubenswrapper[4823]: I0121 17:30:25.926658 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-6774cbb849-xcfsh" Jan 21 17:30:45 crc kubenswrapper[4823]: I0121 17:30:45.616190 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7b84d84c8d-gvj55" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.384569 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4"] Jan 21 17:30:46 crc kubenswrapper[4823]: E0121 17:30:46.384894 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerName="registry-server" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.384906 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerName="registry-server" Jan 21 17:30:46 crc kubenswrapper[4823]: E0121 17:30:46.384923 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerName="extract-content" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.384929 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerName="extract-content" Jan 21 17:30:46 crc kubenswrapper[4823]: E0121 17:30:46.384939 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerName="extract-utilities" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.384946 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerName="extract-utilities" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.385065 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fd8618-b533-4885-a2c7-7782d74ba56f" containerName="registry-server" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.385595 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.388510 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.393073 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zf8s8"] Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.396424 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.398199 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.399128 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.404304 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-4vz7j" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.453745 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4"] Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.473368 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-frr-conf\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.473438 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d053092-d968-421e-8413-7366fb2d5350-metrics-certs\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.473459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3365a322-818c-4bc8-b15f-fb77d81d76ee-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zkrs4\" (UID: \"3365a322-818c-4bc8-b15f-fb77d81d76ee\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.473477 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5h9l\" (UniqueName: \"kubernetes.io/projected/3365a322-818c-4bc8-b15f-fb77d81d76ee-kube-api-access-m5h9l\") pod \"frr-k8s-webhook-server-7df86c4f6c-zkrs4\" (UID: \"3365a322-818c-4bc8-b15f-fb77d81d76ee\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.473509 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cngv\" (UniqueName: \"kubernetes.io/projected/8d053092-d968-421e-8413-7366fb2d5350-kube-api-access-2cngv\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.473611 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-metrics\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.473654 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-frr-sockets\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 
17:30:46.473794 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-reloader\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.473865 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8d053092-d968-421e-8413-7366fb2d5350-frr-startup\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.482475 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-v7dzb"] Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.484090 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.492904 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.493998 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.494255 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.494898 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-45gxf" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.512275 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-xzv5z"] Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.513667 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.515636 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.536633 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xzv5z"] Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-reloader\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575691 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/146255fb-d7e1-463b-93a1-365c08693116-metrics-certs\") pod \"controller-6968d8fdc4-xzv5z\" (UID: \"146255fb-d7e1-463b-93a1-365c08693116\") " pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575721 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8d053092-d968-421e-8413-7366fb2d5350-frr-startup\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575754 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-frr-conf\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575781 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-metrics-certs\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575809 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76vqv\" (UniqueName: \"kubernetes.io/projected/3be600b2-8746-4191-9e0c-e2007fa95890-kube-api-access-76vqv\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575841 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d053092-d968-421e-8413-7366fb2d5350-metrics-certs\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575888 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3365a322-818c-4bc8-b15f-fb77d81d76ee-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zkrs4\" (UID: \"3365a322-818c-4bc8-b15f-fb77d81d76ee\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575916 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m5h9l\" (UniqueName: \"kubernetes.io/projected/3365a322-818c-4bc8-b15f-fb77d81d76ee-kube-api-access-m5h9l\") pod \"frr-k8s-webhook-server-7df86c4f6c-zkrs4\" (UID: \"3365a322-818c-4bc8-b15f-fb77d81d76ee\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575942 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cngv\" (UniqueName: \"kubernetes.io/projected/8d053092-d968-421e-8413-7366fb2d5350-kube-api-access-2cngv\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575970 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3be600b2-8746-4191-9e0c-e2007fa95890-metallb-excludel2\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.575990 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-memberlist\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.576013 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-metrics\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.576045 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6tr\" (UniqueName: \"kubernetes.io/projected/146255fb-d7e1-463b-93a1-365c08693116-kube-api-access-fv6tr\") pod \"controller-6968d8fdc4-xzv5z\" (UID: \"146255fb-d7e1-463b-93a1-365c08693116\") " pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.576062 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146255fb-d7e1-463b-93a1-365c08693116-cert\") pod \"controller-6968d8fdc4-xzv5z\" (UID: \"146255fb-d7e1-463b-93a1-365c08693116\") " pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.576084 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-frr-sockets\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.576605 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-frr-sockets\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: E0121 17:30:46.577059 4823 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 21 17:30:46 crc kubenswrapper[4823]: E0121 17:30:46.577148 4823 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d053092-d968-421e-8413-7366fb2d5350-metrics-certs podName:8d053092-d968-421e-8413-7366fb2d5350 nodeName:}" failed. No retries permitted until 2026-01-21 17:30:47.077128202 +0000 UTC m=+848.003259062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8d053092-d968-421e-8413-7366fb2d5350-metrics-certs") pod "frr-k8s-zf8s8" (UID: "8d053092-d968-421e-8413-7366fb2d5350") : secret "frr-k8s-certs-secret" not found Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.577465 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-frr-conf\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.577573 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-metrics\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.577598 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8d053092-d968-421e-8413-7366fb2d5350-reloader\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.578271 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8d053092-d968-421e-8413-7366fb2d5350-frr-startup\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.584036 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3365a322-818c-4bc8-b15f-fb77d81d76ee-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zkrs4\" (UID: \"3365a322-818c-4bc8-b15f-fb77d81d76ee\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.595079 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5h9l\" (UniqueName: \"kubernetes.io/projected/3365a322-818c-4bc8-b15f-fb77d81d76ee-kube-api-access-m5h9l\") pod \"frr-k8s-webhook-server-7df86c4f6c-zkrs4\" (UID: \"3365a322-818c-4bc8-b15f-fb77d81d76ee\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.599736 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cngv\" (UniqueName: \"kubernetes.io/projected/8d053092-d968-421e-8413-7366fb2d5350-kube-api-access-2cngv\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.676608 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3be600b2-8746-4191-9e0c-e2007fa95890-metallb-excludel2\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.677324 
4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-memberlist\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: E0121 17:30:46.677453 4823 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.677558 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3be600b2-8746-4191-9e0c-e2007fa95890-metallb-excludel2\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.678108 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6tr\" (UniqueName: \"kubernetes.io/projected/146255fb-d7e1-463b-93a1-365c08693116-kube-api-access-fv6tr\") pod \"controller-6968d8fdc4-xzv5z\" (UID: \"146255fb-d7e1-463b-93a1-365c08693116\") " pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: E0121 17:30:46.678257 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-memberlist podName:3be600b2-8746-4191-9e0c-e2007fa95890 nodeName:}" failed. No retries permitted until 2026-01-21 17:30:47.178231593 +0000 UTC m=+848.104362453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-memberlist") pod "speaker-v7dzb" (UID: "3be600b2-8746-4191-9e0c-e2007fa95890") : secret "metallb-memberlist" not found Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.678428 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146255fb-d7e1-463b-93a1-365c08693116-cert\") pod \"controller-6968d8fdc4-xzv5z\" (UID: \"146255fb-d7e1-463b-93a1-365c08693116\") " pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.678673 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/146255fb-d7e1-463b-93a1-365c08693116-metrics-certs\") pod \"controller-6968d8fdc4-xzv5z\" (UID: \"146255fb-d7e1-463b-93a1-365c08693116\") " pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.678811 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-metrics-certs\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.678945 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76vqv\" (UniqueName: \"kubernetes.io/projected/3be600b2-8746-4191-9e0c-e2007fa95890-kube-api-access-76vqv\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.680838 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 17:30:46 crc 
kubenswrapper[4823]: I0121 17:30:46.683036 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-metrics-certs\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.685448 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/146255fb-d7e1-463b-93a1-365c08693116-metrics-certs\") pod \"controller-6968d8fdc4-xzv5z\" (UID: \"146255fb-d7e1-463b-93a1-365c08693116\") " pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.692738 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146255fb-d7e1-463b-93a1-365c08693116-cert\") pod \"controller-6968d8fdc4-xzv5z\" (UID: \"146255fb-d7e1-463b-93a1-365c08693116\") " pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.696995 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76vqv\" (UniqueName: \"kubernetes.io/projected/3be600b2-8746-4191-9e0c-e2007fa95890-kube-api-access-76vqv\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.699521 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6tr\" (UniqueName: \"kubernetes.io/projected/146255fb-d7e1-463b-93a1-365c08693116-kube-api-access-fv6tr\") pod \"controller-6968d8fdc4-xzv5z\" (UID: \"146255fb-d7e1-463b-93a1-365c08693116\") " pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.710350 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:30:46 crc kubenswrapper[4823]: I0121 17:30:46.830760 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.055451 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xzv5z"] Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.087205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d053092-d968-421e-8413-7366fb2d5350-metrics-certs\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.094090 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d053092-d968-421e-8413-7366fb2d5350-metrics-certs\") pod \"frr-k8s-zf8s8\" (UID: \"8d053092-d968-421e-8413-7366fb2d5350\") " pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.142009 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4"] Jan 21 17:30:47 crc kubenswrapper[4823]: W0121 17:30:47.160135 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3365a322_818c_4bc8_b15f_fb77d81d76ee.slice/crio-8f7a78048433ae6bb708b2549fd798e3676db0c13390a00bd4a9c63e8a6d0553 WatchSource:0}: Error finding container 8f7a78048433ae6bb708b2549fd798e3676db0c13390a00bd4a9c63e8a6d0553: Status 404 returned error can't find the container with id 8f7a78048433ae6bb708b2549fd798e3676db0c13390a00bd4a9c63e8a6d0553 Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.187999 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-memberlist\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:47 crc kubenswrapper[4823]: E0121 17:30:47.188164 4823 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 17:30:47 crc kubenswrapper[4823]: E0121 17:30:47.188238 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-memberlist podName:3be600b2-8746-4191-9e0c-e2007fa95890 nodeName:}" failed. No retries permitted until 2026-01-21 17:30:48.188218588 +0000 UTC m=+849.114349448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-memberlist") pod "speaker-v7dzb" (UID: "3be600b2-8746-4191-9e0c-e2007fa95890") : secret "metallb-memberlist" not found Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.321584 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.996605 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xzv5z" event={"ID":"146255fb-d7e1-463b-93a1-365c08693116","Type":"ContainerStarted","Data":"c3e68d96c86f797a8a144cb8f635850c4781a30e515e74ab84384b93eb9224a1"} Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.996986 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xzv5z" event={"ID":"146255fb-d7e1-463b-93a1-365c08693116","Type":"ContainerStarted","Data":"d42ad8b007d1e53cc08814ca47b91efc7494c4c440538ca9b3590cbcbe6dd88c"} Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.997045 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xzv5z" event={"ID":"146255fb-d7e1-463b-93a1-365c08693116","Type":"ContainerStarted","Data":"d34d780283bdf18927ef789595b4cc0f0fdc346ee16d5640bf478f2f1261bcef"} Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.998279 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:30:47 crc kubenswrapper[4823]: I0121 17:30:47.999937 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerStarted","Data":"fb61f35ad35c44fc0b1cf5e0817ae33bad9c4317448a7f056098f27a77177d24"} Jan 21 17:30:48 crc kubenswrapper[4823]: I0121 17:30:48.001429 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" event={"ID":"3365a322-818c-4bc8-b15f-fb77d81d76ee","Type":"ContainerStarted","Data":"8f7a78048433ae6bb708b2549fd798e3676db0c13390a00bd4a9c63e8a6d0553"} Jan 21 17:30:48 crc kubenswrapper[4823]: I0121 17:30:48.027381 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-xzv5z" podStartSLOduration=2.027353816 podStartE2EDuration="2.027353816s" podCreationTimestamp="2026-01-21 17:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:30:48.017451191 +0000 UTC m=+848.943582091" watchObservedRunningTime="2026-01-21 17:30:48.027353816 +0000 UTC m=+848.953484676" Jan 21 17:30:48 crc kubenswrapper[4823]: I0121 17:30:48.205583 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-memberlist\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:48 crc kubenswrapper[4823]: I0121 17:30:48.212268 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3be600b2-8746-4191-9e0c-e2007fa95890-memberlist\") pod \"speaker-v7dzb\" (UID: \"3be600b2-8746-4191-9e0c-e2007fa95890\") " pod="metallb-system/speaker-v7dzb" Jan 21 17:30:48 crc kubenswrapper[4823]: I0121 17:30:48.306949 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-v7dzb" Jan 21 17:30:48 crc kubenswrapper[4823]: W0121 17:30:48.333930 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3be600b2_8746_4191_9e0c_e2007fa95890.slice/crio-7cc763046b1b6ff76c28c77e1c623fdcefdc0d2c3b0e3840a7a883e4fac1265b WatchSource:0}: Error finding container 7cc763046b1b6ff76c28c77e1c623fdcefdc0d2c3b0e3840a7a883e4fac1265b: Status 404 returned error can't find the container with id 7cc763046b1b6ff76c28c77e1c623fdcefdc0d2c3b0e3840a7a883e4fac1265b Jan 21 17:30:49 crc kubenswrapper[4823]: I0121 17:30:49.014984 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v7dzb" event={"ID":"3be600b2-8746-4191-9e0c-e2007fa95890","Type":"ContainerStarted","Data":"58b787c69433166821f3624eec581f63dd2875e8f45d6105bb950e20d7ac5986"} Jan 21 17:30:49 crc kubenswrapper[4823]: I0121 17:30:49.015375 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v7dzb" event={"ID":"3be600b2-8746-4191-9e0c-e2007fa95890","Type":"ContainerStarted","Data":"43ee3209e3a2d68bfad4ca53a9f32b0ad8baf5adcbfd268ac115b2b3bb7af41e"} Jan 21 17:30:49 crc kubenswrapper[4823]: I0121 17:30:49.015396 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v7dzb" event={"ID":"3be600b2-8746-4191-9e0c-e2007fa95890","Type":"ContainerStarted","Data":"7cc763046b1b6ff76c28c77e1c623fdcefdc0d2c3b0e3840a7a883e4fac1265b"} Jan 21 17:30:49 crc kubenswrapper[4823]: I0121 17:30:49.015610 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-v7dzb" Jan 21 17:30:49 crc kubenswrapper[4823]: I0121 17:30:49.042225 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-v7dzb" podStartSLOduration=3.04220404 podStartE2EDuration="3.04220404s" podCreationTimestamp="2026-01-21 17:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:30:49.041203316 +0000 UTC m=+849.967334186" watchObservedRunningTime="2026-01-21 17:30:49.04220404 +0000 UTC m=+849.968334890" Jan 21 17:30:56 crc kubenswrapper[4823]: I0121 17:30:56.089154 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" event={"ID":"3365a322-818c-4bc8-b15f-fb77d81d76ee","Type":"ContainerStarted","Data":"9abf72f8f2bcd3341998da5be294509f142c8cd942044efb2604cfeb1f610222"} Jan 21 17:30:56 crc kubenswrapper[4823]: I0121 17:30:56.089814 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:30:56 crc kubenswrapper[4823]: I0121 17:30:56.093376 4823 generic.go:334] "Generic (PLEG): container finished" podID="8d053092-d968-421e-8413-7366fb2d5350" containerID="a227dd7e3c4abc6d558ee43381e471758afe955e84307e5798ce21433b944842" exitCode=0 Jan 21 17:30:56 crc kubenswrapper[4823]: I0121 17:30:56.093434 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerDied","Data":"a227dd7e3c4abc6d558ee43381e471758afe955e84307e5798ce21433b944842"} Jan 21 17:30:56 crc kubenswrapper[4823]: I0121 17:30:56.113714 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" podStartSLOduration=2.312209233 
podStartE2EDuration="10.113690485s" podCreationTimestamp="2026-01-21 17:30:46 +0000 UTC" firstStartedPulling="2026-01-21 17:30:47.163090237 +0000 UTC m=+848.089221087" lastFinishedPulling="2026-01-21 17:30:54.964571479 +0000 UTC m=+855.890702339" observedRunningTime="2026-01-21 17:30:56.109761237 +0000 UTC m=+857.035892127" watchObservedRunningTime="2026-01-21 17:30:56.113690485 +0000 UTC m=+857.039821345" Jan 21 17:30:57 crc kubenswrapper[4823]: I0121 17:30:57.106959 4823 generic.go:334] "Generic (PLEG): container finished" podID="8d053092-d968-421e-8413-7366fb2d5350" containerID="7379d81e4179a0bbdf7c0c01949ca271f5de5ddb4a2fa9d50a4210a37ec179c0" exitCode=0 Jan 21 17:30:57 crc kubenswrapper[4823]: I0121 17:30:57.107063 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerDied","Data":"7379d81e4179a0bbdf7c0c01949ca271f5de5ddb4a2fa9d50a4210a37ec179c0"} Jan 21 17:30:58 crc kubenswrapper[4823]: I0121 17:30:58.115603 4823 generic.go:334] "Generic (PLEG): container finished" podID="8d053092-d968-421e-8413-7366fb2d5350" containerID="2815d359f6fc495e560874bd3204c0d7d2e6004cacf81c4e542b848b8600da1d" exitCode=0 Jan 21 17:30:58 crc kubenswrapper[4823]: I0121 17:30:58.115664 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerDied","Data":"2815d359f6fc495e560874bd3204c0d7d2e6004cacf81c4e542b848b8600da1d"} Jan 21 17:30:58 crc kubenswrapper[4823]: I0121 17:30:58.312997 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-v7dzb" Jan 21 17:30:59 crc kubenswrapper[4823]: I0121 17:30:59.125125 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerStarted","Data":"95844287ff9cc6f997ac9a3e836011a68e09bf68c8bf069e893048b3c8d31e86"} Jan 21 17:31:00 crc kubenswrapper[4823]: I0121 17:31:00.150959 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerStarted","Data":"f9ef9538150ed7b91b64df903d7ee822594d5c156c4f600071d4d92560b246b1"} Jan 21 17:31:00 crc kubenswrapper[4823]: I0121 17:31:00.151436 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerStarted","Data":"fd549cb85c28c572c8df6814992ff7effd5cd032c8e9ba413589cdb0b61313cb"} Jan 21 17:31:00 crc kubenswrapper[4823]: I0121 17:31:00.151453 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerStarted","Data":"a416dfd9461b451542d0d54c7e24e6d779770f6020fda4d64de3624d9a645da3"} Jan 21 17:31:00 crc kubenswrapper[4823]: I0121 17:31:00.151466 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerStarted","Data":"b7ac32b79c96430db2d37ed013651269953186c39dd2ec1cded956307a9c99d4"} Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.165043 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zf8s8" event={"ID":"8d053092-d968-421e-8413-7366fb2d5350","Type":"ContainerStarted","Data":"ab2a6f0a465f948a450c34cdb64e64a77b2ef26d7c7d94c1c8ba891836c0321c"} Jan 21 17:31:01 crc 
kubenswrapper[4823]: I0121 17:31:01.165587 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.199779 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zf8s8" podStartSLOduration=7.651668212 podStartE2EDuration="15.199748166s" podCreationTimestamp="2026-01-21 17:30:46 +0000 UTC" firstStartedPulling="2026-01-21 17:30:47.436784917 +0000 UTC m=+848.362915787" lastFinishedPulling="2026-01-21 17:30:54.984864881 +0000 UTC m=+855.910995741" observedRunningTime="2026-01-21 17:31:01.194425534 +0000 UTC m=+862.120556464" watchObservedRunningTime="2026-01-21 17:31:01.199748166 +0000 UTC m=+862.125879066" Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.501617 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-957x5"] Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.502606 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-957x5" Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.508316 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.510333 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.511879 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-pfk5z" Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.521885 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-957x5"] Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.623679 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8bkd\" (UniqueName: \"kubernetes.io/projected/f72e73ab-c822-4d83-a00a-e97549b7379a-kube-api-access-s8bkd\") pod \"openstack-operator-index-957x5\" (UID: \"f72e73ab-c822-4d83-a00a-e97549b7379a\") " pod="openstack-operators/openstack-operator-index-957x5" Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.725586 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8bkd\" (UniqueName: \"kubernetes.io/projected/f72e73ab-c822-4d83-a00a-e97549b7379a-kube-api-access-s8bkd\") pod \"openstack-operator-index-957x5\" (UID: \"f72e73ab-c822-4d83-a00a-e97549b7379a\") " pod="openstack-operators/openstack-operator-index-957x5" Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.749168 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8bkd\" (UniqueName: \"kubernetes.io/projected/f72e73ab-c822-4d83-a00a-e97549b7379a-kube-api-access-s8bkd\") pod \"openstack-operator-index-957x5\" (UID: \"f72e73ab-c822-4d83-a00a-e97549b7379a\") " pod="openstack-operators/openstack-operator-index-957x5" Jan 21 17:31:01 crc kubenswrapper[4823]: I0121 17:31:01.823923 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-957x5" Jan 21 17:31:02 crc kubenswrapper[4823]: I0121 17:31:02.185162 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-957x5"] Jan 21 17:31:02 crc kubenswrapper[4823]: I0121 17:31:02.322930 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:31:02 crc kubenswrapper[4823]: I0121 17:31:02.365674 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:31:03 crc kubenswrapper[4823]: I0121 17:31:03.182975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-957x5" event={"ID":"f72e73ab-c822-4d83-a00a-e97549b7379a","Type":"ContainerStarted","Data":"63cf424e487b3e268c4abc10370bc18b3ea55725c7ade12465fee69fba49d1c3"} Jan 21 17:31:04 crc kubenswrapper[4823]: I0121 17:31:04.876977 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-957x5"] Jan 21 17:31:05 crc kubenswrapper[4823]: I0121 17:31:05.213037 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-957x5" event={"ID":"f72e73ab-c822-4d83-a00a-e97549b7379a","Type":"ContainerStarted","Data":"fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f"} Jan 21 17:31:05 crc kubenswrapper[4823]: I0121 17:31:05.239509 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-957x5" podStartSLOduration=1.740281854 podStartE2EDuration="4.239480995s" podCreationTimestamp="2026-01-21 17:31:01 +0000 UTC" firstStartedPulling="2026-01-21 17:31:02.198920112 +0000 UTC m=+863.125050972" lastFinishedPulling="2026-01-21 17:31:04.698119253 +0000 UTC m=+865.624250113" observedRunningTime="2026-01-21 17:31:05.236236845 +0000 UTC m=+866.162367715" watchObservedRunningTime="2026-01-21 17:31:05.239480995 +0000 UTC m=+866.165611855" Jan 21 17:31:05 crc kubenswrapper[4823]: I0121 17:31:05.485057 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x8vd6"] Jan 21 17:31:05 crc kubenswrapper[4823]: I0121 17:31:05.485900 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x8vd6" Jan 21 17:31:05 crc kubenswrapper[4823]: I0121 17:31:05.487162 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cbw6\" (UniqueName: \"kubernetes.io/projected/5a7085d9-6b10-4bfb-a9ca-30f9ea82081d-kube-api-access-6cbw6\") pod \"openstack-operator-index-x8vd6\" (UID: \"5a7085d9-6b10-4bfb-a9ca-30f9ea82081d\") " pod="openstack-operators/openstack-operator-index-x8vd6" Jan 21 17:31:05 crc kubenswrapper[4823]: I0121 17:31:05.498808 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x8vd6"] Jan 21 17:31:05 crc kubenswrapper[4823]: I0121 17:31:05.588239 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cbw6\" (UniqueName: \"kubernetes.io/projected/5a7085d9-6b10-4bfb-a9ca-30f9ea82081d-kube-api-access-6cbw6\") pod \"openstack-operator-index-x8vd6\" (UID: \"5a7085d9-6b10-4bfb-a9ca-30f9ea82081d\") " pod="openstack-operators/openstack-operator-index-x8vd6" Jan 21 17:31:05 crc kubenswrapper[4823]: I0121 17:31:05.612487 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cbw6\" (UniqueName: \"kubernetes.io/projected/5a7085d9-6b10-4bfb-a9ca-30f9ea82081d-kube-api-access-6cbw6\") pod \"openstack-operator-index-x8vd6\" (UID: \"5a7085d9-6b10-4bfb-a9ca-30f9ea82081d\") " pod="openstack-operators/openstack-operator-index-x8vd6" Jan 21 17:31:05 crc kubenswrapper[4823]: I0121 17:31:05.806527 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x8vd6" Jan 21 17:31:06 crc kubenswrapper[4823]: I0121 17:31:06.220603 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-957x5" podUID="f72e73ab-c822-4d83-a00a-e97549b7379a" containerName="registry-server" containerID="cri-o://fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f" gracePeriod=2 Jan 21 17:31:06 crc kubenswrapper[4823]: I0121 17:31:06.256110 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x8vd6"] Jan 21 17:31:06 crc kubenswrapper[4823]: W0121 17:31:06.267781 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a7085d9_6b10_4bfb_a9ca_30f9ea82081d.slice/crio-c6063e5c33d08cd167aaafc8c3d3bb1f067abd7992d65e18d7fe3010f02cd277 WatchSource:0}: Error finding container c6063e5c33d08cd167aaafc8c3d3bb1f067abd7992d65e18d7fe3010f02cd277: Status 404 returned error can't find the container with id c6063e5c33d08cd167aaafc8c3d3bb1f067abd7992d65e18d7fe3010f02cd277 Jan 21 17:31:06 crc kubenswrapper[4823]: I0121 17:31:06.582009 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-957x5" Jan 21 17:31:06 crc kubenswrapper[4823]: I0121 17:31:06.705680 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8bkd\" (UniqueName: \"kubernetes.io/projected/f72e73ab-c822-4d83-a00a-e97549b7379a-kube-api-access-s8bkd\") pod \"f72e73ab-c822-4d83-a00a-e97549b7379a\" (UID: \"f72e73ab-c822-4d83-a00a-e97549b7379a\") " Jan 21 17:31:06 crc kubenswrapper[4823]: I0121 17:31:06.714374 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f72e73ab-c822-4d83-a00a-e97549b7379a-kube-api-access-s8bkd" (OuterVolumeSpecName: "kube-api-access-s8bkd") pod "f72e73ab-c822-4d83-a00a-e97549b7379a" (UID: "f72e73ab-c822-4d83-a00a-e97549b7379a"). InnerVolumeSpecName "kube-api-access-s8bkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:31:06 crc kubenswrapper[4823]: I0121 17:31:06.715799 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zkrs4" Jan 21 17:31:06 crc kubenswrapper[4823]: I0121 17:31:06.807770 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8bkd\" (UniqueName: \"kubernetes.io/projected/f72e73ab-c822-4d83-a00a-e97549b7379a-kube-api-access-s8bkd\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:06 crc kubenswrapper[4823]: I0121 17:31:06.836337 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-xzv5z" Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.229235 4823 generic.go:334] "Generic (PLEG): container finished" podID="f72e73ab-c822-4d83-a00a-e97549b7379a" containerID="fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f" exitCode=0 Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.229290 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-957x5" Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.229308 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-957x5" event={"ID":"f72e73ab-c822-4d83-a00a-e97549b7379a","Type":"ContainerDied","Data":"fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f"} Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.229388 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-957x5" event={"ID":"f72e73ab-c822-4d83-a00a-e97549b7379a","Type":"ContainerDied","Data":"63cf424e487b3e268c4abc10370bc18b3ea55725c7ade12465fee69fba49d1c3"} Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.229417 4823 scope.go:117] "RemoveContainer" containerID="fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f" Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.231926 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x8vd6" event={"ID":"5a7085d9-6b10-4bfb-a9ca-30f9ea82081d","Type":"ContainerStarted","Data":"be55fb8dfafa84d71f55b462a552426e169ab06678bcefdd9ae6984065867652"} Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.231970 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x8vd6" event={"ID":"5a7085d9-6b10-4bfb-a9ca-30f9ea82081d","Type":"ContainerStarted","Data":"c6063e5c33d08cd167aaafc8c3d3bb1f067abd7992d65e18d7fe3010f02cd277"} Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.249549 4823 scope.go:117] "RemoveContainer" containerID="fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f" Jan 21 17:31:07 crc kubenswrapper[4823]: E0121 17:31:07.250735 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f\": container with ID starting with fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f not found: ID does not exist" containerID="fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f" Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.251028 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f"} err="failed to get container status \"fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f\": rpc error: code = NotFound desc = could not find container \"fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f\": container with ID starting with fb35d6192a0860820567b6237c66bc339aa8648f5ade9fcf074d50e4c1c3a65f not found: ID does not exist" Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.254722 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x8vd6" podStartSLOduration=2.202918403 podStartE2EDuration="2.254693734s" podCreationTimestamp="2026-01-21 17:31:05 +0000 UTC" firstStartedPulling="2026-01-21 17:31:06.272034267 +0000 UTC m=+867.198165127" lastFinishedPulling="2026-01-21 17:31:06.323809598 +0000 UTC m=+867.249940458" observedRunningTime="2026-01-21 17:31:07.253609257 +0000 UTC m=+868.179740147" watchObservedRunningTime="2026-01-21 17:31:07.254693734 +0000 UTC m=+868.180824594" Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.273753 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-957x5"] Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.278066 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-957x5"] Jan 21 17:31:07 crc kubenswrapper[4823]: I0121 17:31:07.353094 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f72e73ab-c822-4d83-a00a-e97549b7379a" path="/var/lib/kubelet/pods/f72e73ab-c822-4d83-a00a-e97549b7379a/volumes" Jan 21 17:31:15 crc kubenswrapper[4823]: I0121 17:31:15.807217 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-x8vd6" Jan 21 17:31:15 crc kubenswrapper[4823]: I0121 17:31:15.808088 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x8vd6" Jan 21 17:31:15 crc kubenswrapper[4823]: I0121 17:31:15.858044 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-x8vd6" Jan 21 17:31:16 crc kubenswrapper[4823]: I0121 17:31:16.320238 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-x8vd6" Jan 21 17:31:17 crc kubenswrapper[4823]: I0121 17:31:17.324762 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zf8s8" Jan 21 17:31:21 crc kubenswrapper[4823]: I0121 17:31:21.994144 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs"] Jan 21 17:31:21 crc kubenswrapper[4823]: E0121 17:31:21.994678 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f72e73ab-c822-4d83-a00a-e97549b7379a" containerName="registry-server" Jan 21 17:31:21 crc kubenswrapper[4823]: I0121 17:31:21.994690 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f72e73ab-c822-4d83-a00a-e97549b7379a" containerName="registry-server" Jan 21 17:31:21 crc kubenswrapper[4823]: I0121 17:31:21.994797 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f72e73ab-c822-4d83-a00a-e97549b7379a" containerName="registry-server" Jan 21 17:31:21 crc kubenswrapper[4823]: I0121 17:31:21.995835 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:21 crc kubenswrapper[4823]: I0121 17:31:21.999372 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4q4lg" Jan 21 17:31:22 crc kubenswrapper[4823]: I0121 17:31:22.010270 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs"] Jan 21 17:31:22 crc kubenswrapper[4823]: I0121 17:31:22.154118 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-util\") pod \"ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:22 crc kubenswrapper[4823]: I0121 17:31:22.154235 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-bundle\") pod \"ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:22 crc kubenswrapper[4823]: I0121 17:31:22.154357 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxthf\" (UniqueName: \"kubernetes.io/projected/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-kube-api-access-sxthf\") pod \"ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:22 crc kubenswrapper[4823]: I0121 17:31:22.255710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxthf\" (UniqueName: \"kubernetes.io/projected/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-kube-api-access-sxthf\") pod \"ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:22 crc kubenswrapper[4823]: I0121 17:31:22.255826 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-util\") pod \"ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:22 crc kubenswrapper[4823]: I0121 17:31:22.255891 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-bundle\") pod \"ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:26 crc kubenswrapper[4823]: I0121 17:31:22.256477 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-util\") pod \"ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:26 crc kubenswrapper[4823]: I0121 17:31:22.256735 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-bundle\") pod \"ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:26 crc kubenswrapper[4823]: I0121 17:31:22.274957 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxthf\" (UniqueName: \"kubernetes.io/projected/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-kube-api-access-sxthf\") pod \"ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:26 crc kubenswrapper[4823]: I0121 17:31:22.313424 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:26 crc kubenswrapper[4823]: I0121 17:31:26.772371 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs"] Jan 21 17:31:27 crc kubenswrapper[4823]: I0121 17:31:27.387879 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" event={"ID":"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1","Type":"ContainerStarted","Data":"ad6526be407c27f9837cbf146f6ef035827d20071a976f618b44428fa5d821a7"} Jan 21 17:31:28 crc kubenswrapper[4823]: I0121 17:31:28.398166 4823 generic.go:334] "Generic (PLEG): container finished" podID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerID="9c48b2ff85d57cfe70998ac3bc4e22883fc6e4814a5b5217208ecec513c9c1fb" exitCode=0 Jan 21 17:31:28 crc kubenswrapper[4823]: I0121 17:31:28.398230 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" event={"ID":"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1","Type":"ContainerDied","Data":"9c48b2ff85d57cfe70998ac3bc4e22883fc6e4814a5b5217208ecec513c9c1fb"} Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.229252 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mzpk8"] Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.231212 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.244229 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzpk8"] Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.263240 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcglv\" (UniqueName: \"kubernetes.io/projected/688964ad-c530-4b0f-adfa-cf41ec318dc8-kube-api-access-tcglv\") pod \"community-operators-mzpk8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.263355 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-utilities\") pod \"community-operators-mzpk8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.263394 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-catalog-content\") pod \"community-operators-mzpk8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.364656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcglv\" (UniqueName: \"kubernetes.io/projected/688964ad-c530-4b0f-adfa-cf41ec318dc8-kube-api-access-tcglv\") pod \"community-operators-mzpk8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.364735 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-utilities\") pod \"community-operators-mzpk8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.364768 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-catalog-content\") pod \"community-operators-mzpk8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.365302 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-catalog-content\") pod \"community-operators-mzpk8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.366619 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-utilities\") pod \"community-operators-mzpk8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.388471 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tcglv\" (UniqueName: \"kubernetes.io/projected/688964ad-c530-4b0f-adfa-cf41ec318dc8-kube-api-access-tcglv\") pod \"community-operators-mzpk8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:29 crc kubenswrapper[4823]: I0121 17:31:29.554972 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:30 crc kubenswrapper[4823]: I0121 17:31:30.203703 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzpk8"] Jan 21 17:31:30 crc kubenswrapper[4823]: W0121 17:31:30.208733 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod688964ad_c530_4b0f_adfa_cf41ec318dc8.slice/crio-b20ef61a8f40641501a4410410a4a60ad05366e23add858cc0df8caea373cdb7 WatchSource:0}: Error finding container b20ef61a8f40641501a4410410a4a60ad05366e23add858cc0df8caea373cdb7: Status 404 returned error can't find the container with id b20ef61a8f40641501a4410410a4a60ad05366e23add858cc0df8caea373cdb7 Jan 21 17:31:30 crc kubenswrapper[4823]: I0121 17:31:30.420820 4823 generic.go:334] "Generic (PLEG): container finished" podID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerID="50583d7b719510e64801d1d3213d0bbd05ff109870b5c6844fbac903a647d367" exitCode=0 Jan 21 17:31:30 crc kubenswrapper[4823]: I0121 17:31:30.420952 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" event={"ID":"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1","Type":"ContainerDied","Data":"50583d7b719510e64801d1d3213d0bbd05ff109870b5c6844fbac903a647d367"} Jan 21 17:31:30 crc kubenswrapper[4823]: I0121 17:31:30.425903 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpk8" event={"ID":"688964ad-c530-4b0f-adfa-cf41ec318dc8","Type":"ContainerStarted","Data":"d9d63ff7e9714b2f65780e97fd5fdd352bf855644aaf29f42161fe8f65778a67"} Jan 21 17:31:30 crc kubenswrapper[4823]: I0121 17:31:30.425976 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpk8" event={"ID":"688964ad-c530-4b0f-adfa-cf41ec318dc8","Type":"ContainerStarted","Data":"b20ef61a8f40641501a4410410a4a60ad05366e23add858cc0df8caea373cdb7"} Jan 21 17:31:31 crc kubenswrapper[4823]: I0121 17:31:31.435901 4823 generic.go:334] "Generic (PLEG): container finished" podID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerID="d9d63ff7e9714b2f65780e97fd5fdd352bf855644aaf29f42161fe8f65778a67" exitCode=0 Jan 21 17:31:31 crc kubenswrapper[4823]: I0121 17:31:31.435988 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpk8" event={"ID":"688964ad-c530-4b0f-adfa-cf41ec318dc8","Type":"ContainerDied","Data":"d9d63ff7e9714b2f65780e97fd5fdd352bf855644aaf29f42161fe8f65778a67"} Jan 21 17:31:31 crc kubenswrapper[4823]: I0121 17:31:31.439704 4823 generic.go:334] "Generic (PLEG): container finished" podID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerID="55ae5d825b60d2e9717c8b147f70f16419fb19c1878a15cf419e71ed04f34181" exitCode=0 Jan 21 17:31:31 crc kubenswrapper[4823]: I0121 17:31:31.439739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" 
event={"ID":"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1","Type":"ContainerDied","Data":"55ae5d825b60d2e9717c8b147f70f16419fb19c1878a15cf419e71ed04f34181"} Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.013803 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-krlwt"] Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.016103 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.061436 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-krlwt"] Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.208460 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-utilities\") pod \"certified-operators-krlwt\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.208839 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x94cv\" (UniqueName: \"kubernetes.io/projected/3c43b34b-ca14-400e-b6f9-d45c53e11767-kube-api-access-x94cv\") pod \"certified-operators-krlwt\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.208975 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-catalog-content\") pod \"certified-operators-krlwt\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.309886 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-utilities\") pod \"certified-operators-krlwt\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.310224 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x94cv\" (UniqueName: \"kubernetes.io/projected/3c43b34b-ca14-400e-b6f9-d45c53e11767-kube-api-access-x94cv\") pod \"certified-operators-krlwt\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.310326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-catalog-content\") pod \"certified-operators-krlwt\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.310558 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-utilities\") pod \"certified-operators-krlwt\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 
17:31:32.310953 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-catalog-content\") pod \"certified-operators-krlwt\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.334142 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x94cv\" (UniqueName: \"kubernetes.io/projected/3c43b34b-ca14-400e-b6f9-d45c53e11767-kube-api-access-x94cv\") pod \"certified-operators-krlwt\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.377514 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.748002 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.875728 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-krlwt"] Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.922624 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-util\") pod \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.922705 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxthf\" (UniqueName: \"kubernetes.io/projected/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-kube-api-access-sxthf\") pod \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.922743 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-bundle\") pod \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\" (UID: \"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1\") " Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.923663 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-bundle" (OuterVolumeSpecName: "bundle") pod "d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" (UID: "d1c2ba5e-6779-4a6b-be72-0c26004cd2f1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:31:32 crc kubenswrapper[4823]: I0121 17:31:32.929221 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-kube-api-access-sxthf" (OuterVolumeSpecName: "kube-api-access-sxthf") pod "d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" (UID: "d1c2ba5e-6779-4a6b-be72-0c26004cd2f1"). InnerVolumeSpecName "kube-api-access-sxthf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:31:33 crc kubenswrapper[4823]: I0121 17:31:33.024976 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxthf\" (UniqueName: \"kubernetes.io/projected/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-kube-api-access-sxthf\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:33 crc kubenswrapper[4823]: I0121 17:31:33.025502 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:33 crc kubenswrapper[4823]: I0121 17:31:33.226688 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-util" (OuterVolumeSpecName: "util") pod "d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" (UID: "d1c2ba5e-6779-4a6b-be72-0c26004cd2f1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:31:33 crc kubenswrapper[4823]: I0121 17:31:33.229349 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d1c2ba5e-6779-4a6b-be72-0c26004cd2f1-util\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:33 crc kubenswrapper[4823]: I0121 17:31:33.456252 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krlwt" event={"ID":"3c43b34b-ca14-400e-b6f9-d45c53e11767","Type":"ContainerStarted","Data":"2efdf7b1999b6dcd489b018ed913722e5c3b1889b0487e6f97dd8bead2927201"} Jan 21 17:31:33 crc kubenswrapper[4823]: I0121 17:31:33.459597 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" event={"ID":"d1c2ba5e-6779-4a6b-be72-0c26004cd2f1","Type":"ContainerDied","Data":"ad6526be407c27f9837cbf146f6ef035827d20071a976f618b44428fa5d821a7"} Jan 21 17:31:33 crc kubenswrapper[4823]: I0121 17:31:33.459655 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad6526be407c27f9837cbf146f6ef035827d20071a976f618b44428fa5d821a7" Jan 21 17:31:33 crc kubenswrapper[4823]: I0121 17:31:33.459729 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs" Jan 21 17:31:34 crc kubenswrapper[4823]: I0121 17:31:34.474558 4823 generic.go:334] "Generic (PLEG): container finished" podID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerID="30397206a0e0ea2724bfd8753e1c7f74ed7138abdacbc49faa502d027e7aca9f" exitCode=0 Jan 21 17:31:34 crc kubenswrapper[4823]: I0121 17:31:34.474649 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpk8" event={"ID":"688964ad-c530-4b0f-adfa-cf41ec318dc8","Type":"ContainerDied","Data":"30397206a0e0ea2724bfd8753e1c7f74ed7138abdacbc49faa502d027e7aca9f"} Jan 21 17:31:34 crc kubenswrapper[4823]: I0121 17:31:34.478107 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerID="e3b4359fbe0998bc4d653ce0119a0eece516b9968f8c2108561ddccb65364db0" exitCode=0 Jan 21 17:31:34 crc kubenswrapper[4823]: I0121 17:31:34.478147 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krlwt" event={"ID":"3c43b34b-ca14-400e-b6f9-d45c53e11767","Type":"ContainerDied","Data":"e3b4359fbe0998bc4d653ce0119a0eece516b9968f8c2108561ddccb65364db0"} Jan 21 17:31:35 crc kubenswrapper[4823]: I0121 17:31:35.487832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krlwt" event={"ID":"3c43b34b-ca14-400e-b6f9-d45c53e11767","Type":"ContainerStarted","Data":"2d8882a4518c7de9bb76b8d87ec01bff5bb5ecaf5bbad7ff1c7a1ab2c46da28c"} Jan 21 17:31:35 crc kubenswrapper[4823]: I0121 17:31:35.492267 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpk8" event={"ID":"688964ad-c530-4b0f-adfa-cf41ec318dc8","Type":"ContainerStarted","Data":"5584644f96c4242819aaa9d839298229f4b4523816244f6f3eb5ce6a5f982067"} Jan 21 17:31:35 crc kubenswrapper[4823]: I0121 17:31:35.535697 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mzpk8" podStartSLOduration=3.062639116 podStartE2EDuration="6.535669988s" podCreationTimestamp="2026-01-21 17:31:29 +0000 UTC" firstStartedPulling="2026-01-21 17:31:31.438019055 +0000 UTC m=+892.364149955" lastFinishedPulling="2026-01-21 17:31:34.911049967 +0000 UTC m=+895.837180827" observedRunningTime="2026-01-21 17:31:35.53250323 +0000 UTC m=+896.458634100" watchObservedRunningTime="2026-01-21 17:31:35.535669988 +0000 UTC m=+896.461800848" Jan 21 17:31:36 crc kubenswrapper[4823]: I0121 17:31:36.500945 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerID="2d8882a4518c7de9bb76b8d87ec01bff5bb5ecaf5bbad7ff1c7a1ab2c46da28c" exitCode=0 Jan 21 17:31:36 crc kubenswrapper[4823]: I0121 17:31:36.501065 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krlwt" event={"ID":"3c43b34b-ca14-400e-b6f9-d45c53e11767","Type":"ContainerDied","Data":"2d8882a4518c7de9bb76b8d87ec01bff5bb5ecaf5bbad7ff1c7a1ab2c46da28c"} Jan 21 17:31:37 crc kubenswrapper[4823]: I0121 17:31:37.511014 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krlwt" event={"ID":"3c43b34b-ca14-400e-b6f9-d45c53e11767","Type":"ContainerStarted","Data":"278c6dab78edd9dce0f380821501a58df3d314871c9e0ee095ef6c82da2eb91a"} Jan 21 17:31:37 crc kubenswrapper[4823]: I0121 17:31:37.553567 4823 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-krlwt" podStartSLOduration=4.13510774 podStartE2EDuration="6.553533203s" podCreationTimestamp="2026-01-21 17:31:31 +0000 UTC" firstStartedPulling="2026-01-21 17:31:34.480109877 +0000 UTC m=+895.406240737" lastFinishedPulling="2026-01-21 17:31:36.89853534 +0000 UTC m=+897.824666200" observedRunningTime="2026-01-21 17:31:37.531925698 +0000 UTC m=+898.458056568" watchObservedRunningTime="2026-01-21 17:31:37.553533203 +0000 UTC m=+898.479664063" Jan 21 17:31:39 crc kubenswrapper[4823]: I0121 17:31:39.556905 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:39 crc kubenswrapper[4823]: I0121 17:31:39.556989 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:39 crc kubenswrapper[4823]: I0121 17:31:39.604934 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:40 crc kubenswrapper[4823]: I0121 17:31:40.580523 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:41 crc kubenswrapper[4823]: I0121 17:31:41.793396 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw"] Jan 21 17:31:41 crc kubenswrapper[4823]: E0121 17:31:41.793729 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerName="extract" Jan 21 17:31:41 crc kubenswrapper[4823]: I0121 17:31:41.793747 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerName="extract" Jan 21 17:31:41 crc kubenswrapper[4823]: E0121 17:31:41.793768 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerName="util" Jan 21 17:31:41 crc kubenswrapper[4823]: I0121 17:31:41.793774 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerName="util" Jan 21 17:31:41 crc kubenswrapper[4823]: E0121 17:31:41.793788 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerName="pull" Jan 21 17:31:41 crc kubenswrapper[4823]: I0121 17:31:41.793797 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerName="pull" Jan 21 17:31:41 crc kubenswrapper[4823]: I0121 17:31:41.793974 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c2ba5e-6779-4a6b-be72-0c26004cd2f1" containerName="extract" Jan 21 17:31:41 crc kubenswrapper[4823]: I0121 17:31:41.794502 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" Jan 21 17:31:41 crc kubenswrapper[4823]: I0121 17:31:41.797080 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-9h2nm" Jan 21 17:31:41 crc kubenswrapper[4823]: I0121 17:31:41.821784 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw"] Jan 21 17:31:41 crc kubenswrapper[4823]: I0121 17:31:41.961175 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lkxr\" (UniqueName: \"kubernetes.io/projected/a8052883-2f70-4dbf-b81c-2bdc0752c41e-kube-api-access-4lkxr\") pod \"openstack-operator-controller-init-65d788f684-rh8vw\" (UID: \"a8052883-2f70-4dbf-b81c-2bdc0752c41e\") " pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" Jan 21 17:31:42 crc kubenswrapper[4823]: I0121 17:31:42.063049 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lkxr\" (UniqueName: \"kubernetes.io/projected/a8052883-2f70-4dbf-b81c-2bdc0752c41e-kube-api-access-4lkxr\") pod \"openstack-operator-controller-init-65d788f684-rh8vw\" (UID: \"a8052883-2f70-4dbf-b81c-2bdc0752c41e\") " pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" Jan 21 17:31:42 crc kubenswrapper[4823]: I0121 17:31:42.087492 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lkxr\" (UniqueName: \"kubernetes.io/projected/a8052883-2f70-4dbf-b81c-2bdc0752c41e-kube-api-access-4lkxr\") pod \"openstack-operator-controller-init-65d788f684-rh8vw\" (UID: \"a8052883-2f70-4dbf-b81c-2bdc0752c41e\") " pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" Jan 21 17:31:42 crc kubenswrapper[4823]: I0121 17:31:42.111942 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" Jan 21 17:31:42 crc kubenswrapper[4823]: I0121 17:31:42.383125 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:42 crc kubenswrapper[4823]: I0121 17:31:42.383847 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:42 crc kubenswrapper[4823]: I0121 17:31:42.387635 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw"] Jan 21 17:31:42 crc kubenswrapper[4823]: I0121 17:31:42.441136 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:42 crc kubenswrapper[4823]: I0121 17:31:42.546010 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" event={"ID":"a8052883-2f70-4dbf-b81c-2bdc0752c41e","Type":"ContainerStarted","Data":"6fd28f93b6d0a5831035464b7b1a4dd2f9171239cce82b8d375c332aff9044f0"} Jan 21 17:31:42 crc kubenswrapper[4823]: I0121 17:31:42.590333 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:43 crc kubenswrapper[4823]: I0121 17:31:43.195233 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzpk8"] Jan 21 17:31:43 crc kubenswrapper[4823]: I0121 17:31:43.195560 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mzpk8" podUID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerName="registry-server" containerID="cri-o://5584644f96c4242819aaa9d839298229f4b4523816244f6f3eb5ce6a5f982067" gracePeriod=2 Jan 21 17:31:43 crc kubenswrapper[4823]: I0121 17:31:43.596277 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-krlwt"] Jan 21 17:31:44 crc kubenswrapper[4823]: I0121 17:31:44.575227 4823 generic.go:334] "Generic (PLEG): container finished" podID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerID="5584644f96c4242819aaa9d839298229f4b4523816244f6f3eb5ce6a5f982067" exitCode=0 Jan 21 17:31:44 crc kubenswrapper[4823]: I0121 17:31:44.575304 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpk8" event={"ID":"688964ad-c530-4b0f-adfa-cf41ec318dc8","Type":"ContainerDied","Data":"5584644f96c4242819aaa9d839298229f4b4523816244f6f3eb5ce6a5f982067"} Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.071190 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.071267 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.338679 4823 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.420498 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-catalog-content\") pod \"688964ad-c530-4b0f-adfa-cf41ec318dc8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.420633 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcglv\" (UniqueName: \"kubernetes.io/projected/688964ad-c530-4b0f-adfa-cf41ec318dc8-kube-api-access-tcglv\") pod \"688964ad-c530-4b0f-adfa-cf41ec318dc8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.420698 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-utilities\") pod \"688964ad-c530-4b0f-adfa-cf41ec318dc8\" (UID: \"688964ad-c530-4b0f-adfa-cf41ec318dc8\") " Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.422363 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-utilities" (OuterVolumeSpecName: "utilities") pod "688964ad-c530-4b0f-adfa-cf41ec318dc8" (UID: "688964ad-c530-4b0f-adfa-cf41ec318dc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.427813 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688964ad-c530-4b0f-adfa-cf41ec318dc8-kube-api-access-tcglv" (OuterVolumeSpecName: "kube-api-access-tcglv") pod "688964ad-c530-4b0f-adfa-cf41ec318dc8" (UID: "688964ad-c530-4b0f-adfa-cf41ec318dc8"). InnerVolumeSpecName "kube-api-access-tcglv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.473983 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "688964ad-c530-4b0f-adfa-cf41ec318dc8" (UID: "688964ad-c530-4b0f-adfa-cf41ec318dc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.522590 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.522635 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcglv\" (UniqueName: \"kubernetes.io/projected/688964ad-c530-4b0f-adfa-cf41ec318dc8-kube-api-access-tcglv\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.522648 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688964ad-c530-4b0f-adfa-cf41ec318dc8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.585662 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-krlwt" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="registry-server" containerID="cri-o://278c6dab78edd9dce0f380821501a58df3d314871c9e0ee095ef6c82da2eb91a" gracePeriod=2 Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.586034 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzpk8" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.586467 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpk8" event={"ID":"688964ad-c530-4b0f-adfa-cf41ec318dc8","Type":"ContainerDied","Data":"b20ef61a8f40641501a4410410a4a60ad05366e23add858cc0df8caea373cdb7"} Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.586516 4823 scope.go:117] "RemoveContainer" containerID="5584644f96c4242819aaa9d839298229f4b4523816244f6f3eb5ce6a5f982067" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.623936 4823 scope.go:117] "RemoveContainer" containerID="30397206a0e0ea2724bfd8753e1c7f74ed7138abdacbc49faa502d027e7aca9f" Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.629350 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzpk8"] Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.640487 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mzpk8"] Jan 21 17:31:45 crc kubenswrapper[4823]: I0121 17:31:45.645023 4823 scope.go:117] "RemoveContainer" containerID="d9d63ff7e9714b2f65780e97fd5fdd352bf855644aaf29f42161fe8f65778a67" Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.609094 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerID="278c6dab78edd9dce0f380821501a58df3d314871c9e0ee095ef6c82da2eb91a" exitCode=0 Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.609534 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krlwt" event={"ID":"3c43b34b-ca14-400e-b6f9-d45c53e11767","Type":"ContainerDied","Data":"278c6dab78edd9dce0f380821501a58df3d314871c9e0ee095ef6c82da2eb91a"} Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.813092 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m48r4"] Jan 21 17:31:46 crc kubenswrapper[4823]: E0121 17:31:46.813431 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerName="extract-utilities" Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.813445 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerName="extract-utilities" Jan 21 17:31:46 crc kubenswrapper[4823]: E0121 17:31:46.813466 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerName="extract-content" Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.813472 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerName="extract-content" Jan 21 17:31:46 crc kubenswrapper[4823]: E0121 17:31:46.813485 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerName="registry-server" Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.813492 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerName="registry-server" Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.813643 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="688964ad-c530-4b0f-adfa-cf41ec318dc8" containerName="registry-server" Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.814865 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.820489 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m48r4"] Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.945299 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-utilities\") pod \"redhat-marketplace-m48r4\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.945439 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-catalog-content\") pod \"redhat-marketplace-m48r4\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:46 crc kubenswrapper[4823]: I0121 17:31:46.945711 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8s2x\" (UniqueName: \"kubernetes.io/projected/ee5caa86-b875-4054-a787-36d06e7335f9-kube-api-access-g8s2x\") pod \"redhat-marketplace-m48r4\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:47 crc kubenswrapper[4823]: I0121 17:31:47.047623 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-catalog-content\") pod \"redhat-marketplace-m48r4\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:47 crc kubenswrapper[4823]: I0121 17:31:47.047713 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8s2x\" (UniqueName: \"kubernetes.io/projected/ee5caa86-b875-4054-a787-36d06e7335f9-kube-api-access-g8s2x\") pod 
\"redhat-marketplace-m48r4\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:47 crc kubenswrapper[4823]: I0121 17:31:47.047745 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-utilities\") pod \"redhat-marketplace-m48r4\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:47 crc kubenswrapper[4823]: I0121 17:31:47.048213 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-catalog-content\") pod \"redhat-marketplace-m48r4\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:47 crc kubenswrapper[4823]: I0121 17:31:47.048301 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-utilities\") pod \"redhat-marketplace-m48r4\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:47 crc kubenswrapper[4823]: I0121 17:31:47.076151 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8s2x\" (UniqueName: \"kubernetes.io/projected/ee5caa86-b875-4054-a787-36d06e7335f9-kube-api-access-g8s2x\") pod \"redhat-marketplace-m48r4\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:47 crc kubenswrapper[4823]: I0121 17:31:47.141400 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:47 crc kubenswrapper[4823]: I0121 17:31:47.352808 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688964ad-c530-4b0f-adfa-cf41ec318dc8" path="/var/lib/kubelet/pods/688964ad-c530-4b0f-adfa-cf41ec318dc8/volumes" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.410351 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.576536 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x94cv\" (UniqueName: \"kubernetes.io/projected/3c43b34b-ca14-400e-b6f9-d45c53e11767-kube-api-access-x94cv\") pod \"3c43b34b-ca14-400e-b6f9-d45c53e11767\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.576825 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-utilities\") pod \"3c43b34b-ca14-400e-b6f9-d45c53e11767\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.577604 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-catalog-content\") pod \"3c43b34b-ca14-400e-b6f9-d45c53e11767\" (UID: \"3c43b34b-ca14-400e-b6f9-d45c53e11767\") " Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.578244 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-utilities" (OuterVolumeSpecName: "utilities") pod "3c43b34b-ca14-400e-b6f9-d45c53e11767" (UID: "3c43b34b-ca14-400e-b6f9-d45c53e11767"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.587358 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c43b34b-ca14-400e-b6f9-d45c53e11767-kube-api-access-x94cv" (OuterVolumeSpecName: "kube-api-access-x94cv") pod "3c43b34b-ca14-400e-b6f9-d45c53e11767" (UID: "3c43b34b-ca14-400e-b6f9-d45c53e11767"). InnerVolumeSpecName "kube-api-access-x94cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.631284 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krlwt" event={"ID":"3c43b34b-ca14-400e-b6f9-d45c53e11767","Type":"ContainerDied","Data":"2efdf7b1999b6dcd489b018ed913722e5c3b1889b0487e6f97dd8bead2927201"} Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.631358 4823 scope.go:117] "RemoveContainer" containerID="278c6dab78edd9dce0f380821501a58df3d314871c9e0ee095ef6c82da2eb91a" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.631380 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krlwt" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.631900 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c43b34b-ca14-400e-b6f9-d45c53e11767" (UID: "3c43b34b-ca14-400e-b6f9-d45c53e11767"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.674000 4823 scope.go:117] "RemoveContainer" containerID="2d8882a4518c7de9bb76b8d87ec01bff5bb5ecaf5bbad7ff1c7a1ab2c46da28c" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.695410 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.696065 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c43b34b-ca14-400e-b6f9-d45c53e11767-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.696639 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x94cv\" (UniqueName: \"kubernetes.io/projected/3c43b34b-ca14-400e-b6f9-d45c53e11767-kube-api-access-x94cv\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.715010 4823 scope.go:117] "RemoveContainer" containerID="e3b4359fbe0998bc4d653ce0119a0eece516b9968f8c2108561ddccb65364db0" Jan 21 17:31:48 crc kubenswrapper[4823]: I0121 17:31:48.786193 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m48r4"] Jan 21 17:31:49 crc kubenswrapper[4823]: I0121 17:31:49.024897 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-krlwt"] Jan 21 17:31:49 crc kubenswrapper[4823]: I0121 17:31:49.031201 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-krlwt"] Jan 21 17:31:49 crc kubenswrapper[4823]: I0121 17:31:49.353729 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" path="/var/lib/kubelet/pods/3c43b34b-ca14-400e-b6f9-d45c53e11767/volumes" Jan 21 17:31:49 crc kubenswrapper[4823]: I0121 17:31:49.641674 4823 generic.go:334] "Generic (PLEG): container finished" podID="ee5caa86-b875-4054-a787-36d06e7335f9" containerID="2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4" exitCode=0 Jan 21 17:31:49 crc kubenswrapper[4823]: I0121 17:31:49.641796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m48r4" event={"ID":"ee5caa86-b875-4054-a787-36d06e7335f9","Type":"ContainerDied","Data":"2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4"} Jan 21 17:31:49 crc kubenswrapper[4823]: I0121 17:31:49.641838 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m48r4" event={"ID":"ee5caa86-b875-4054-a787-36d06e7335f9","Type":"ContainerStarted","Data":"1a1a806bb638f610da07048a946d24777dd8885d70814c6171479c7a6ef1b9c7"} Jan 21 17:31:49 crc kubenswrapper[4823]: I0121 17:31:49.645735 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" event={"ID":"a8052883-2f70-4dbf-b81c-2bdc0752c41e","Type":"ContainerStarted","Data":"f41e66db839a0435d5980ebca9e5416065e692a813ed6865407bd7d07cdba93c"} Jan 21 17:31:49 crc kubenswrapper[4823]: I0121 17:31:49.646484 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" Jan 21 17:31:49 crc kubenswrapper[4823]: I0121 17:31:49.692253 4823 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" podStartSLOduration=2.430755196 podStartE2EDuration="8.692228943s" podCreationTimestamp="2026-01-21 17:31:41 +0000 UTC" firstStartedPulling="2026-01-21 17:31:42.413975714 +0000 UTC m=+903.340106574" lastFinishedPulling="2026-01-21 17:31:48.675449461 +0000 UTC m=+909.601580321" observedRunningTime="2026-01-21 17:31:49.687961997 +0000 UTC m=+910.614092897" watchObservedRunningTime="2026-01-21 17:31:49.692228943 +0000 UTC m=+910.618359823" Jan 21 17:31:51 crc kubenswrapper[4823]: I0121 17:31:51.666006 4823 generic.go:334] "Generic (PLEG): container finished" podID="ee5caa86-b875-4054-a787-36d06e7335f9" containerID="2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d" exitCode=0 Jan 21 17:31:51 crc kubenswrapper[4823]: I0121 17:31:51.666417 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m48r4" event={"ID":"ee5caa86-b875-4054-a787-36d06e7335f9","Type":"ContainerDied","Data":"2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d"} Jan 21 17:31:52 crc kubenswrapper[4823]: I0121 17:31:52.676607 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m48r4" event={"ID":"ee5caa86-b875-4054-a787-36d06e7335f9","Type":"ContainerStarted","Data":"f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c"} Jan 21 17:31:52 crc kubenswrapper[4823]: I0121 17:31:52.701481 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m48r4" podStartSLOduration=4.26740304 podStartE2EDuration="6.701459701s" podCreationTimestamp="2026-01-21 17:31:46 +0000 UTC" firstStartedPulling="2026-01-21 17:31:49.643507937 +0000 UTC m=+910.569638807" lastFinishedPulling="2026-01-21 17:31:52.077564608 +0000 UTC m=+913.003695468" observedRunningTime="2026-01-21 17:31:52.697847461 +0000 UTC m=+913.623978341" watchObservedRunningTime="2026-01-21 17:31:52.701459701 +0000 UTC m=+913.627590561" Jan 21 17:31:57 crc kubenswrapper[4823]: I0121 17:31:57.143065 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:57 crc kubenswrapper[4823]: I0121 17:31:57.143736 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:57 crc kubenswrapper[4823]: I0121 17:31:57.193170 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:57 crc kubenswrapper[4823]: I0121 17:31:57.749165 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:31:57 crc kubenswrapper[4823]: I0121 17:31:57.794387 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m48r4"] Jan 21 17:31:59 crc kubenswrapper[4823]: I0121 17:31:59.722568 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m48r4" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="registry-server" containerID="cri-o://f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c" gracePeriod=2 Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.154149 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.271114 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8s2x\" (UniqueName: \"kubernetes.io/projected/ee5caa86-b875-4054-a787-36d06e7335f9-kube-api-access-g8s2x\") pod \"ee5caa86-b875-4054-a787-36d06e7335f9\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.271158 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-catalog-content\") pod \"ee5caa86-b875-4054-a787-36d06e7335f9\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.271286 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-utilities\") pod \"ee5caa86-b875-4054-a787-36d06e7335f9\" (UID: \"ee5caa86-b875-4054-a787-36d06e7335f9\") " Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.272337 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-utilities" (OuterVolumeSpecName: "utilities") pod "ee5caa86-b875-4054-a787-36d06e7335f9" (UID: "ee5caa86-b875-4054-a787-36d06e7335f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.280755 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5caa86-b875-4054-a787-36d06e7335f9-kube-api-access-g8s2x" (OuterVolumeSpecName: "kube-api-access-g8s2x") pod "ee5caa86-b875-4054-a787-36d06e7335f9" (UID: "ee5caa86-b875-4054-a787-36d06e7335f9"). InnerVolumeSpecName "kube-api-access-g8s2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.303598 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee5caa86-b875-4054-a787-36d06e7335f9" (UID: "ee5caa86-b875-4054-a787-36d06e7335f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.372718 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.372962 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8s2x\" (UniqueName: \"kubernetes.io/projected/ee5caa86-b875-4054-a787-36d06e7335f9-kube-api-access-g8s2x\") on node \"crc\" DevicePath \"\"" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.372976 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5caa86-b875-4054-a787-36d06e7335f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.731451 4823 generic.go:334] "Generic (PLEG): container finished" podID="ee5caa86-b875-4054-a787-36d06e7335f9" containerID="f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c" exitCode=0 Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.731497 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m48r4" event={"ID":"ee5caa86-b875-4054-a787-36d06e7335f9","Type":"ContainerDied","Data":"f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c"} Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.731550 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m48r4" event={"ID":"ee5caa86-b875-4054-a787-36d06e7335f9","Type":"ContainerDied","Data":"1a1a806bb638f610da07048a946d24777dd8885d70814c6171479c7a6ef1b9c7"} Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.731575 4823 scope.go:117] "RemoveContainer" containerID="f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.731577 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m48r4" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.752397 4823 scope.go:117] "RemoveContainer" containerID="2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.760138 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m48r4"] Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.765308 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m48r4"] Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.783494 4823 scope.go:117] "RemoveContainer" containerID="2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.800828 4823 scope.go:117] "RemoveContainer" containerID="f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c" Jan 21 17:32:00 crc kubenswrapper[4823]: E0121 17:32:00.801333 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c\": container with ID starting with f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c not found: ID does not exist" containerID="f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.801443 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c"} err="failed to get container status \"f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c\": rpc error: code = NotFound desc = could not find container \"f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c\": container with ID starting with f1feb11cca554f36a223f0959c7d294a03700f70e5f977fecfcd4133b71e7c0c not found: ID does not exist" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.801522 4823 scope.go:117] "RemoveContainer" containerID="2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d" Jan 21 17:32:00 crc kubenswrapper[4823]: E0121 17:32:00.802220 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d\": container with ID starting with 2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d not found: ID does not exist" containerID="2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.802250 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d"} err="failed to get container status \"2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d\": rpc error: code = NotFound desc = could not find container \"2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d\": container with ID starting with 2cb960417b9df16f9379992f4203f27e0e610f5b0f2bbe27fb5f867732475c2d not found: ID does not exist" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.802270 4823 scope.go:117] "RemoveContainer" containerID="2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4" Jan 21 17:32:00 crc kubenswrapper[4823]: E0121 17:32:00.802499 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4\": container with ID starting with 2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4 not found: ID does not exist" containerID="2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4" Jan 21 17:32:00 crc kubenswrapper[4823]: I0121 17:32:00.802590 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4"} err="failed to get container status \"2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4\": rpc error: code = NotFound desc = could not find container \"2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4\": container with ID starting with 2e3df7df8bd92f6dc47e96014831c1ba9b460a90c5c10ca4a8c535258fe2e8b4 not found: ID does not exist" Jan 21 17:32:01 crc kubenswrapper[4823]: I0121 17:32:01.359349 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" path="/var/lib/kubelet/pods/ee5caa86-b875-4054-a787-36d06e7335f9/volumes" Jan 21 17:32:02 crc kubenswrapper[4823]: I0121 17:32:02.114980 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-65d788f684-rh8vw" Jan 21 17:32:15 crc kubenswrapper[4823]: I0121 17:32:15.070600 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:32:15 crc kubenswrapper[4823]: I0121 17:32:15.071319 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.848925 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm"] Jan 21 17:32:24 crc kubenswrapper[4823]: E0121 17:32:24.850339 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="extract-content" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850361 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="extract-content" Jan 21 17:32:24 crc kubenswrapper[4823]: E0121 17:32:24.850378 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="registry-server" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850387 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="registry-server" Jan 21 17:32:24 crc kubenswrapper[4823]: E0121 17:32:24.850409 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="extract-utilities" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850419 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="extract-utilities" Jan 21 
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.848925 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm"]
Jan 21 17:32:24 crc kubenswrapper[4823]: E0121 17:32:24.850339 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="extract-content"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850361 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="extract-content"
Jan 21 17:32:24 crc kubenswrapper[4823]: E0121 17:32:24.850378 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="registry-server"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850387 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="registry-server"
Jan 21 17:32:24 crc kubenswrapper[4823]: E0121 17:32:24.850409 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="extract-utilities"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850419 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="extract-utilities"
Jan 21 17:32:24 crc kubenswrapper[4823]: E0121 17:32:24.850432 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="registry-server"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850442 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="registry-server"
Jan 21 17:32:24 crc kubenswrapper[4823]: E0121 17:32:24.850456 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="extract-content"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850463 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="extract-content"
Jan 21 17:32:24 crc kubenswrapper[4823]: E0121 17:32:24.850477 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="extract-utilities"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850485 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="extract-utilities"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850680 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5caa86-b875-4054-a787-36d06e7335f9" containerName="registry-server"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.850706 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c43b34b-ca14-400e-b6f9-d45c53e11767" containerName="registry-server"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.851485 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.854167 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2"]
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.855825 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.856029 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mvsg9"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.859547 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4wjpz"
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.866595 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm"]
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.873909 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2"]
Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.896256 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs"]
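[Note] The cpu_manager and memory_manager burst above is housekeeping run when the first new pod is admitted after the marketplace pods were deleted: both managers sweep their checkpointed per-container assignments and drop entries whose pods are gone, logging one E line per stale entry found and an I line for the deletion. A hedged sketch of that sweep pattern; all types and names below are invented for illustration, not kubelet's actual state types:

package main

import "fmt"

// Illustrative stale-state sweep in the style of cpu_manager's
// RemoveStaleState: drop checkpointed assignments for containers whose pod
// is no longer active.
type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q (pod %s)\n", k.container, k.podUID)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]string{
		{"ee5caa86", "registry-server"}: "cpuset 0-1",
		{"3c43b34b", "extract-content"}: "cpuset 2",
		{"1a2264de", "manager"}:         "cpuset 3",
	}
	// Only the barbican pod (1a2264de...) is still active in this example.
	removeStaleState(assignments, map[string]bool{"1a2264de": true})
	fmt.Println(len(assignments)) // 1
}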
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.906036 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rx262" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.912047 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6"] Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.915373 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.920821 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ftjqd" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.946083 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm"] Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.947334 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.950136 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sps94\" (UniqueName: \"kubernetes.io/projected/1a2264de-b154-4669-aeae-fd1e71b29b0d-kube-api-access-sps94\") pod \"barbican-operator-controller-manager-7ddb5c749-r6txm\" (UID: \"1a2264de-b154-4669-aeae-fd1e71b29b0d\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.950193 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28wrj\" (UniqueName: \"kubernetes.io/projected/41648873-3abc-47a7-8c4d-8c3a15bdf09e-kube-api-access-28wrj\") pod \"cinder-operator-controller-manager-9b68f5989-z8pp2\" (UID: \"41648873-3abc-47a7-8c4d-8c3a15bdf09e\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.954516 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-wbrjr" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.958715 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl"] Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.972795 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" Jan 21 17:32:24 crc kubenswrapper[4823]: I0121 17:32:24.977837 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zpjwh" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.026200 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.059355 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6zv\" (UniqueName: \"kubernetes.io/projected/c05b436d-2d25-449f-a929-9424e4b6021f-kube-api-access-bs6zv\") pod \"horizon-operator-controller-manager-77d5c5b54f-5srsl\" (UID: \"c05b436d-2d25-449f-a929-9424e4b6021f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.059424 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97tpz\" (UniqueName: \"kubernetes.io/projected/1282a2fd-37f7-4fd4-9c38-69b9c87f2910-kube-api-access-97tpz\") pod \"designate-operator-controller-manager-9f958b845-pd8gs\" (UID: \"1282a2fd-37f7-4fd4-9c38-69b9c87f2910\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.059448 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl7p6\" (UniqueName: \"kubernetes.io/projected/4923d7a0-77b7-4d86-a6fc-fff0e9a81766-kube-api-access-jl7p6\") pod \"heat-operator-controller-manager-594c8c9d5d-hx6mm\" (UID: \"4923d7a0-77b7-4d86-a6fc-fff0e9a81766\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.059491 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz55g\" (UniqueName: \"kubernetes.io/projected/9fe37a8e-6b17-4aad-8787-142c28faac52-kube-api-access-dz55g\") pod \"glance-operator-controller-manager-c6994669c-2b4l6\" (UID: \"9fe37a8e-6b17-4aad-8787-142c28faac52\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.059581 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sps94\" (UniqueName: \"kubernetes.io/projected/1a2264de-b154-4669-aeae-fd1e71b29b0d-kube-api-access-sps94\") pod \"barbican-operator-controller-manager-7ddb5c749-r6txm\" (UID: \"1a2264de-b154-4669-aeae-fd1e71b29b0d\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.059615 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28wrj\" (UniqueName: \"kubernetes.io/projected/41648873-3abc-47a7-8c4d-8c3a15bdf09e-kube-api-access-28wrj\") pod \"cinder-operator-controller-manager-9b68f5989-z8pp2\" (UID: \"41648873-3abc-47a7-8c4d-8c3a15bdf09e\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.072932 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6"] Jan 21 17:32:25 crc 
kubenswrapper[4823]: I0121 17:32:25.103621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sps94\" (UniqueName: \"kubernetes.io/projected/1a2264de-b154-4669-aeae-fd1e71b29b0d-kube-api-access-sps94\") pod \"barbican-operator-controller-manager-7ddb5c749-r6txm\" (UID: \"1a2264de-b154-4669-aeae-fd1e71b29b0d\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.103933 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28wrj\" (UniqueName: \"kubernetes.io/projected/41648873-3abc-47a7-8c4d-8c3a15bdf09e-kube-api-access-28wrj\") pod \"cinder-operator-controller-manager-9b68f5989-z8pp2\" (UID: \"41648873-3abc-47a7-8c4d-8c3a15bdf09e\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.143003 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.159432 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.160793 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97tpz\" (UniqueName: \"kubernetes.io/projected/1282a2fd-37f7-4fd4-9c38-69b9c87f2910-kube-api-access-97tpz\") pod \"designate-operator-controller-manager-9f958b845-pd8gs\" (UID: \"1282a2fd-37f7-4fd4-9c38-69b9c87f2910\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.160835 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl7p6\" (UniqueName: \"kubernetes.io/projected/4923d7a0-77b7-4d86-a6fc-fff0e9a81766-kube-api-access-jl7p6\") pod \"heat-operator-controller-manager-594c8c9d5d-hx6mm\" (UID: \"4923d7a0-77b7-4d86-a6fc-fff0e9a81766\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.160901 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz55g\" (UniqueName: \"kubernetes.io/projected/9fe37a8e-6b17-4aad-8787-142c28faac52-kube-api-access-dz55g\") pod \"glance-operator-controller-manager-c6994669c-2b4l6\" (UID: \"9fe37a8e-6b17-4aad-8787-142c28faac52\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.161012 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6zv\" (UniqueName: \"kubernetes.io/projected/c05b436d-2d25-449f-a929-9424e4b6021f-kube-api-access-bs6zv\") pod \"horizon-operator-controller-manager-77d5c5b54f-5srsl\" (UID: \"c05b436d-2d25-449f-a929-9424e4b6021f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.180885 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.183136 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.184742 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.186209 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.196901 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hnfx5" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.200666 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.206676 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl7p6\" (UniqueName: \"kubernetes.io/projected/4923d7a0-77b7-4d86-a6fc-fff0e9a81766-kube-api-access-jl7p6\") pod \"heat-operator-controller-manager-594c8c9d5d-hx6mm\" (UID: \"4923d7a0-77b7-4d86-a6fc-fff0e9a81766\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.211757 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs6zv\" (UniqueName: \"kubernetes.io/projected/c05b436d-2d25-449f-a929-9424e4b6021f-kube-api-access-bs6zv\") pod \"horizon-operator-controller-manager-77d5c5b54f-5srsl\" (UID: \"c05b436d-2d25-449f-a929-9424e4b6021f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.212282 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.213557 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.218442 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz55g\" (UniqueName: \"kubernetes.io/projected/9fe37a8e-6b17-4aad-8787-142c28faac52-kube-api-access-dz55g\") pod \"glance-operator-controller-manager-c6994669c-2b4l6\" (UID: \"9fe37a8e-6b17-4aad-8787-142c28faac52\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.219213 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5g5nr" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.236330 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.242059 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97tpz\" (UniqueName: \"kubernetes.io/projected/1282a2fd-37f7-4fd4-9c38-69b9c87f2910-kube-api-access-97tpz\") pod \"designate-operator-controller-manager-9f958b845-pd8gs\" (UID: \"1282a2fd-37f7-4fd4-9c38-69b9c87f2910\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.247931 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.266528 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwh2v\" (UniqueName: \"kubernetes.io/projected/b57ac152-55e4-445b-be02-c74b9fe96905-kube-api-access-bwh2v\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.266597 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.292992 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.317679 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.319191 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.330478 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2vbs9" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.331397 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.343822 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.344499 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.345067 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.348214 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-k7f7m" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.368175 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwh2v\" (UniqueName: \"kubernetes.io/projected/b57ac152-55e4-445b-be02-c74b9fe96905-kube-api-access-bwh2v\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.368214 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.368242 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqt5\" (UniqueName: \"kubernetes.io/projected/57081a15-e11e-4d49-b516-3f8ccabea011-kube-api-access-qrqt5\") pod \"ironic-operator-controller-manager-78757b4889-8b4ft\" (UID: \"57081a15-e11e-4d49-b516-3f8ccabea011\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" Jan 21 17:32:25 crc kubenswrapper[4823]: E0121 17:32:25.368815 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:25 crc kubenswrapper[4823]: E0121 17:32:25.369214 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert podName:b57ac152-55e4-445b-be02-c74b9fe96905 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:25.869191979 +0000 UTC m=+946.795322839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert") pod "infra-operator-controller-manager-77c48c7859-5lr2z" (UID: "b57ac152-55e4-445b-be02-c74b9fe96905") : secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.389658 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.392366 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.392553 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.392647 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.395577 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gmhtz" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.395707 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwh2v\" (UniqueName: \"kubernetes.io/projected/b57ac152-55e4-445b-be02-c74b9fe96905-kube-api-access-bwh2v\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.402364 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.443123 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.444311 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.454599 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-5kzf2" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.455770 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.464516 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.472529 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z796\" (UniqueName: \"kubernetes.io/projected/49bc570b-b84d-48a3-b322-95b9ece80f26-kube-api-access-7z796\") pod \"mariadb-operator-controller-manager-c87fff755-w8s6p\" (UID: \"49bc570b-b84d-48a3-b322-95b9ece80f26\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.472637 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrqt5\" (UniqueName: \"kubernetes.io/projected/57081a15-e11e-4d49-b516-3f8ccabea011-kube-api-access-qrqt5\") pod \"ironic-operator-controller-manager-78757b4889-8b4ft\" (UID: \"57081a15-e11e-4d49-b516-3f8ccabea011\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.472878 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndmvr\" (UniqueName: \"kubernetes.io/projected/85830ef7-1db0-47bf-b03f-0720fceda12b-kube-api-access-ndmvr\") pod \"keystone-operator-controller-manager-767fdc4f47-thq22\" (UID: \"85830ef7-1db0-47bf-b03f-0720fceda12b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.472975 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7lr2p" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.473000 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtnt\" (UniqueName: \"kubernetes.io/projected/94b91e49-7b7f-4e7a-bcdf-31d847d8c517-kube-api-access-kjtnt\") pod \"manila-operator-controller-manager-864f6b75bf-56n9m\" (UID: \"94b91e49-7b7f-4e7a-bcdf-31d847d8c517\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.518508 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrqt5\" (UniqueName: \"kubernetes.io/projected/57081a15-e11e-4d49-b516-3f8ccabea011-kube-api-access-qrqt5\") pod \"ironic-operator-controller-manager-78757b4889-8b4ft\" (UID: \"57081a15-e11e-4d49-b516-3f8ccabea011\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.518790 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.545356 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.558575 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.568511 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.569624 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.573721 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b6xbf" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.573946 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.575177 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z796\" (UniqueName: \"kubernetes.io/projected/49bc570b-b84d-48a3-b322-95b9ece80f26-kube-api-access-7z796\") pod \"mariadb-operator-controller-manager-c87fff755-w8s6p\" (UID: \"49bc570b-b84d-48a3-b322-95b9ece80f26\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.575242 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p62w\" (UniqueName: \"kubernetes.io/projected/ae942c2a-d0df-4bf2-8e76-ca95474ad50f-kube-api-access-8p62w\") pod \"nova-operator-controller-manager-65849867d6-rxvvc\" (UID: \"ae942c2a-d0df-4bf2-8e76-ca95474ad50f\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.575270 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xcs\" (UniqueName: \"kubernetes.io/projected/5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128-kube-api-access-d2xcs\") pod \"neutron-operator-controller-manager-cb4666565-pqfhj\" (UID: \"5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.575324 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndmvr\" (UniqueName: \"kubernetes.io/projected/85830ef7-1db0-47bf-b03f-0720fceda12b-kube-api-access-ndmvr\") pod \"keystone-operator-controller-manager-767fdc4f47-thq22\" (UID: \"85830ef7-1db0-47bf-b03f-0720fceda12b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.575371 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjtnt\" (UniqueName: \"kubernetes.io/projected/94b91e49-7b7f-4e7a-bcdf-31d847d8c517-kube-api-access-kjtnt\") pod \"manila-operator-controller-manager-864f6b75bf-56n9m\" (UID: 
\"94b91e49-7b7f-4e7a-bcdf-31d847d8c517\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.581595 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.582683 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.591670 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.592748 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.603120 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5v79k" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.604690 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6h96x" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.604890 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.607611 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z796\" (UniqueName: \"kubernetes.io/projected/49bc570b-b84d-48a3-b322-95b9ece80f26-kube-api-access-7z796\") pod \"mariadb-operator-controller-manager-c87fff755-w8s6p\" (UID: \"49bc570b-b84d-48a3-b322-95b9ece80f26\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.609806 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndmvr\" (UniqueName: \"kubernetes.io/projected/85830ef7-1db0-47bf-b03f-0720fceda12b-kube-api-access-ndmvr\") pod \"keystone-operator-controller-manager-767fdc4f47-thq22\" (UID: \"85830ef7-1db0-47bf-b03f-0720fceda12b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.612655 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjtnt\" (UniqueName: \"kubernetes.io/projected/94b91e49-7b7f-4e7a-bcdf-31d847d8c517-kube-api-access-kjtnt\") pod \"manila-operator-controller-manager-864f6b75bf-56n9m\" (UID: \"94b91e49-7b7f-4e7a-bcdf-31d847d8c517\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.636717 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.643754 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.645102 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.651156 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.652310 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.654586 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-88b7k" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.665160 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.670376 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.685309 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kfjg\" (UniqueName: \"kubernetes.io/projected/6571a611-d208-419b-9304-5a6d6b8c1d1b-kube-api-access-8kfjg\") pod \"octavia-operator-controller-manager-7fc9b76cf6-cr4d5\" (UID: \"6571a611-d208-419b-9304-5a6d6b8c1d1b\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.685383 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p62w\" (UniqueName: \"kubernetes.io/projected/ae942c2a-d0df-4bf2-8e76-ca95474ad50f-kube-api-access-8p62w\") pod \"nova-operator-controller-manager-65849867d6-rxvvc\" (UID: \"ae942c2a-d0df-4bf2-8e76-ca95474ad50f\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.685411 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2xcs\" (UniqueName: \"kubernetes.io/projected/5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128-kube-api-access-d2xcs\") pod \"neutron-operator-controller-manager-cb4666565-pqfhj\" (UID: \"5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.685461 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.685574 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dkfs\" (UniqueName: \"kubernetes.io/projected/b7356c47-15be-48c6-a78e-5389b077d2c6-kube-api-access-7dkfs\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.685635 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbg4\" (UniqueName: \"kubernetes.io/projected/9819367b-7d70-48cf-bdde-0c0e2ccf5fbd-kube-api-access-zlbg4\") pod \"ovn-operator-controller-manager-55db956ddc-9qqbg\" (UID: \"9819367b-7d70-48cf-bdde-0c0e2ccf5fbd\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.687793 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.688736 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.711146 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.712417 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p62w\" (UniqueName: \"kubernetes.io/projected/ae942c2a-d0df-4bf2-8e76-ca95474ad50f-kube-api-access-8p62w\") pod \"nova-operator-controller-manager-65849867d6-rxvvc\" (UID: \"ae942c2a-d0df-4bf2-8e76-ca95474ad50f\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.714963 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-gg9wp" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.718915 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2xcs\" (UniqueName: \"kubernetes.io/projected/5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128-kube-api-access-d2xcs\") pod \"neutron-operator-controller-manager-cb4666565-pqfhj\" (UID: \"5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.737394 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.767994 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.781797 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.785324 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.789174 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-w4kjc" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.789444 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqfb\" (UniqueName: \"kubernetes.io/projected/2a4a8662-2615-4cf3-950d-2602ec921aaf-kube-api-access-6bqfb\") pod \"placement-operator-controller-manager-686df47fcb-lds9d\" (UID: \"2a4a8662-2615-4cf3-950d-2602ec921aaf\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.789585 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kfjg\" (UniqueName: \"kubernetes.io/projected/6571a611-d208-419b-9304-5a6d6b8c1d1b-kube-api-access-8kfjg\") pod \"octavia-operator-controller-manager-7fc9b76cf6-cr4d5\" (UID: \"6571a611-d208-419b-9304-5a6d6b8c1d1b\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.789736 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.790059 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b24g\" (UniqueName: \"kubernetes.io/projected/33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf-kube-api-access-6b24g\") pod \"swift-operator-controller-manager-85dd56d4cc-52tkk\" (UID: \"33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" Jan 21 17:32:25 crc kubenswrapper[4823]: E0121 17:32:25.790148 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.790168 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dkfs\" (UniqueName: \"kubernetes.io/projected/b7356c47-15be-48c6-a78e-5389b077d2c6-kube-api-access-7dkfs\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.790206 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlbg4\" (UniqueName: \"kubernetes.io/projected/9819367b-7d70-48cf-bdde-0c0e2ccf5fbd-kube-api-access-zlbg4\") pod \"ovn-operator-controller-manager-55db956ddc-9qqbg\" (UID: \"9819367b-7d70-48cf-bdde-0c0e2ccf5fbd\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" Jan 21 17:32:25 crc kubenswrapper[4823]: E0121 17:32:25.790248 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert 
podName:b7356c47-15be-48c6-a78e-5389b077d2c6 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:26.290216944 +0000 UTC m=+947.216347804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" (UID: "b7356c47-15be-48c6-a78e-5389b077d2c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.822142 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.824046 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.834940 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlbg4\" (UniqueName: \"kubernetes.io/projected/9819367b-7d70-48cf-bdde-0c0e2ccf5fbd-kube-api-access-zlbg4\") pod \"ovn-operator-controller-manager-55db956ddc-9qqbg\" (UID: \"9819367b-7d70-48cf-bdde-0c0e2ccf5fbd\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.835432 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dkfs\" (UniqueName: \"kubernetes.io/projected/b7356c47-15be-48c6-a78e-5389b077d2c6-kube-api-access-7dkfs\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.836670 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kfjg\" (UniqueName: \"kubernetes.io/projected/6571a611-d208-419b-9304-5a6d6b8c1d1b-kube-api-access-8kfjg\") pod \"octavia-operator-controller-manager-7fc9b76cf6-cr4d5\" (UID: \"6571a611-d208-419b-9304-5a6d6b8c1d1b\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.843379 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.849836 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.857723 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9tvsx" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.859567 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.860597 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.894457 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b24g\" (UniqueName: \"kubernetes.io/projected/33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf-kube-api-access-6b24g\") pod \"swift-operator-controller-manager-85dd56d4cc-52tkk\" (UID: \"33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.894518 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kclxv\" (UniqueName: \"kubernetes.io/projected/3b0d5663-0000-4ddf-bd72-893997a79681-kube-api-access-kclxv\") pod \"telemetry-operator-controller-manager-5f8f495fcf-b4h92\" (UID: \"3b0d5663-0000-4ddf-bd72-893997a79681\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.894564 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6btz\" (UniqueName: \"kubernetes.io/projected/b6b63b33-5320-4cc6-a4b8-c359e19cdfef-kube-api-access-m6btz\") pod \"test-operator-controller-manager-7cd8bc9dbb-msnsz\" (UID: \"b6b63b33-5320-4cc6-a4b8-c359e19cdfef\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.894595 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqfb\" (UniqueName: \"kubernetes.io/projected/2a4a8662-2615-4cf3-950d-2602ec921aaf-kube-api-access-6bqfb\") pod \"placement-operator-controller-manager-686df47fcb-lds9d\" (UID: \"2a4a8662-2615-4cf3-950d-2602ec921aaf\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.894623 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:25 crc kubenswrapper[4823]: E0121 17:32:25.894755 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:25 crc kubenswrapper[4823]: E0121 17:32:25.894809 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert podName:b57ac152-55e4-445b-be02-c74b9fe96905 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:26.894790691 +0000 UTC m=+947.820921551 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert") pod "infra-operator-controller-manager-77c48c7859-5lr2z" (UID: "b57ac152-55e4-445b-be02-c74b9fe96905") : secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.925456 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b24g\" (UniqueName: \"kubernetes.io/projected/33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf-kube-api-access-6b24g\") pod \"swift-operator-controller-manager-85dd56d4cc-52tkk\" (UID: \"33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.929382 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqfb\" (UniqueName: \"kubernetes.io/projected/2a4a8662-2615-4cf3-950d-2602ec921aaf-kube-api-access-6bqfb\") pod \"placement-operator-controller-manager-686df47fcb-lds9d\" (UID: \"2a4a8662-2615-4cf3-950d-2602ec921aaf\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.930729 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.932174 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.935200 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6n9h6" Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.952196 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.985743 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn"] Jan 21 17:32:25 crc kubenswrapper[4823]: I0121 17:32:25.986956 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:25.992193 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:25.992541 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zml9s" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:25.992659 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:25.996675 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kclxv\" (UniqueName: \"kubernetes.io/projected/3b0d5663-0000-4ddf-bd72-893997a79681-kube-api-access-kclxv\") pod \"telemetry-operator-controller-manager-5f8f495fcf-b4h92\" (UID: \"3b0d5663-0000-4ddf-bd72-893997a79681\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:25.996715 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6btz\" (UniqueName: \"kubernetes.io/projected/b6b63b33-5320-4cc6-a4b8-c359e19cdfef-kube-api-access-m6btz\") pod \"test-operator-controller-manager-7cd8bc9dbb-msnsz\" (UID: \"b6b63b33-5320-4cc6-a4b8-c359e19cdfef\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:25.996752 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7tvf\" (UniqueName: \"kubernetes.io/projected/29b92151-85da-4df7-a4f1-c8fb7e08a4cf-kube-api-access-l7tvf\") pod \"watcher-operator-controller-manager-6cdc5b758-zbxpz\" (UID: \"29b92151-85da-4df7-a4f1-c8fb7e08a4cf\") " pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.008092 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.020962 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.024526 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kclxv\" (UniqueName: \"kubernetes.io/projected/3b0d5663-0000-4ddf-bd72-893997a79681-kube-api-access-kclxv\") pod \"telemetry-operator-controller-manager-5f8f495fcf-b4h92\" (UID: \"3b0d5663-0000-4ddf-bd72-893997a79681\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.032292 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6btz\" (UniqueName: \"kubernetes.io/projected/b6b63b33-5320-4cc6-a4b8-c359e19cdfef-kube-api-access-m6btz\") pod \"test-operator-controller-manager-7cd8bc9dbb-msnsz\" (UID: \"b6b63b33-5320-4cc6-a4b8-c359e19cdfef\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.034619 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.038493 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.040351 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mz6d5" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.041413 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.043938 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.063434 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.066713 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.092397 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.099317 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7tvf\" (UniqueName: \"kubernetes.io/projected/29b92151-85da-4df7-a4f1-c8fb7e08a4cf-kube-api-access-l7tvf\") pod \"watcher-operator-controller-manager-6cdc5b758-zbxpz\" (UID: \"29b92151-85da-4df7-a4f1-c8fb7e08a4cf\") " pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.099408 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdk7b\" (UniqueName: \"kubernetes.io/projected/2dcb7191-3fd3-435f-bf67-713b3953c63f-kube-api-access-jdk7b\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.099475 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.099504 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhmb\" (UniqueName: \"kubernetes.io/projected/0d36d6af-75ff-4f6e-88aa-154e71609284-kube-api-access-4fhmb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-h462s\" (UID: \"0d36d6af-75ff-4f6e-88aa-154e71609284\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.099527 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.131628 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7tvf\" (UniqueName: \"kubernetes.io/projected/29b92151-85da-4df7-a4f1-c8fb7e08a4cf-kube-api-access-l7tvf\") pod \"watcher-operator-controller-manager-6cdc5b758-zbxpz\" (UID: \"29b92151-85da-4df7-a4f1-c8fb7e08a4cf\") " pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.163610 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.171201 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.180481 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.184803 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.202864 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdk7b\" (UniqueName: \"kubernetes.io/projected/2dcb7191-3fd3-435f-bf67-713b3953c63f-kube-api-access-jdk7b\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.202955 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.202987 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fhmb\" (UniqueName: \"kubernetes.io/projected/0d36d6af-75ff-4f6e-88aa-154e71609284-kube-api-access-4fhmb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-h462s\" (UID: \"0d36d6af-75ff-4f6e-88aa-154e71609284\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.203012 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.203172 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.203223 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:26.70320861 +0000 UTC m=+947.629339470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "webhook-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.203586 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.203627 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:26.703608929 +0000 UTC m=+947.629739789 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "metrics-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.226320 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.229536 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdk7b\" (UniqueName: \"kubernetes.io/projected/2dcb7191-3fd3-435f-bf67-713b3953c63f-kube-api-access-jdk7b\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.245833 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fhmb\" (UniqueName: \"kubernetes.io/projected/0d36d6af-75ff-4f6e-88aa-154e71609284-kube-api-access-4fhmb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-h462s\" (UID: \"0d36d6af-75ff-4f6e-88aa-154e71609284\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.314521 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.314614 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert podName:b7356c47-15be-48c6-a78e-5389b077d2c6 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:27.314591835 +0000 UTC m=+948.240722695 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" (UID: "b7356c47-15be-48c6-a78e-5389b077d2c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.314379 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.335417 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.394293 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.437562 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.446884 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm"] Jan 21 17:32:26 crc kubenswrapper[4823]: W0121 17:32:26.482871 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4923d7a0_77b7_4d86_a6fc_fff0e9a81766.slice/crio-df2a9dcaaa65504fa0fb2c6a50da12fd29b17c2c90773370b36166897ea7146a WatchSource:0}: Error finding container df2a9dcaaa65504fa0fb2c6a50da12fd29b17c2c90773370b36166897ea7146a: Status 404 returned error can't find the container with id df2a9dcaaa65504fa0fb2c6a50da12fd29b17c2c90773370b36166897ea7146a Jan 21 17:32:26 crc kubenswrapper[4823]: W0121 17:32:26.488976 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc05b436d_2d25_449f_a929_9424e4b6021f.slice/crio-ed8f39e69b549f21dd910b0f07046c63e571420ab1ee53210c2840652485353a WatchSource:0}: Error finding container ed8f39e69b549f21dd910b0f07046c63e571420ab1ee53210c2840652485353a: Status 404 returned error can't find the container with id ed8f39e69b549f21dd910b0f07046c63e571420ab1ee53210c2840652485353a Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.732470 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.732610 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " 
pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.732781 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.732838 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:27.73281941 +0000 UTC m=+948.658950270 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "metrics-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.733521 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.733643 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:27.733609 +0000 UTC m=+948.659740030 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "webhook-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.771406 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.796802 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.939906 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.950098 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft"] Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.950260 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: E0121 17:32:26.951107 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert podName:b57ac152-55e4-445b-be02-c74b9fe96905 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:28.951078189 +0000 UTC m=+949.877209049 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert") pod "infra-operator-controller-manager-77c48c7859-5lr2z" (UID: "b57ac152-55e4-445b-be02-c74b9fe96905") : secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.970031 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.975622 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m"] Jan 21 17:32:26 crc kubenswrapper[4823]: I0121 17:32:26.997178 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2" event={"ID":"41648873-3abc-47a7-8c4d-8c3a15bdf09e","Type":"ContainerStarted","Data":"9b3a2de6257cfe1b264d5c9493417638e52af5c6c89fb40465c2b20bb00a38c1"} Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.003010 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" event={"ID":"9fe37a8e-6b17-4aad-8787-142c28faac52","Type":"ContainerStarted","Data":"04c85264f0137f340b3670c6dfda922dec344a40a85ab87d2e22a3cdaa6ee1e4"} Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.011170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" event={"ID":"57081a15-e11e-4d49-b516-3f8ccabea011","Type":"ContainerStarted","Data":"8282b5dfd1bb318cac89c023d9d9d035e44d55d834a95133f497bc38cab6a5a3"} Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.012755 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" event={"ID":"4923d7a0-77b7-4d86-a6fc-fff0e9a81766","Type":"ContainerStarted","Data":"df2a9dcaaa65504fa0fb2c6a50da12fd29b17c2c90773370b36166897ea7146a"} Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.018834 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm" event={"ID":"1a2264de-b154-4669-aeae-fd1e71b29b0d","Type":"ContainerStarted","Data":"9013ba1ed09bcdfe9e8c6628f807262d7f2cb7701e7adf3cfe26ce7045da4c48"} Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.020327 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" event={"ID":"85830ef7-1db0-47bf-b03f-0720fceda12b","Type":"ContainerStarted","Data":"1e6b472829b418528879fe8770748b1a516c5c3f9f27b9fe055adf2a2c382330"} Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.022903 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" event={"ID":"1282a2fd-37f7-4fd4-9c38-69b9c87f2910","Type":"ContainerStarted","Data":"9a0dc7fe3d83adf969737a8c36df12c82ff4d3685e6cffc30e3ff51a2ab6b635"} Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.024649 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" event={"ID":"c05b436d-2d25-449f-a929-9424e4b6021f","Type":"ContainerStarted","Data":"ed8f39e69b549f21dd910b0f07046c63e571420ab1ee53210c2840652485353a"} Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.165423 4823 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg"] Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.206204 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj"] Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.233376 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5"] Jan 21 17:32:27 crc kubenswrapper[4823]: W0121 17:32:27.263232 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cc4aa26_0ff4_4f18_b6e1_e2fdcea53128.slice/crio-3a7b8ba4f58044108513e2a9b74886c5df376a9c529e323264435269bbb43f2a WatchSource:0}: Error finding container 3a7b8ba4f58044108513e2a9b74886c5df376a9c529e323264435269bbb43f2a: Status 404 returned error can't find the container with id 3a7b8ba4f58044108513e2a9b74886c5df376a9c529e323264435269bbb43f2a Jan 21 17:32:27 crc kubenswrapper[4823]: W0121 17:32:27.287009 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9819367b_7d70_48cf_bdde_0c0e2ccf5fbd.slice/crio-14c4d4f210ed398d113f89020e6292ca336570207a96de79c704c7e3a0460820 WatchSource:0}: Error finding container 14c4d4f210ed398d113f89020e6292ca336570207a96de79c704c7e3a0460820: Status 404 returned error can't find the container with id 14c4d4f210ed398d113f89020e6292ca336570207a96de79c704c7e3a0460820 Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.360674 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc"] Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.371525 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.371690 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.371743 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert podName:b7356c47-15be-48c6-a78e-5389b077d2c6 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:29.371728335 +0000 UTC m=+950.297859195 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" (UID: "b7356c47-15be-48c6-a78e-5389b077d2c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.402970 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d"] Jan 21 17:32:27 crc kubenswrapper[4823]: W0121 17:32:27.410110 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a4a8662_2615_4cf3_950d_2602ec921aaf.slice/crio-1887c1b5d726f525aacb5ec8d95c7754642a90247869253b343a30baa4ed6107 WatchSource:0}: Error finding container 1887c1b5d726f525aacb5ec8d95c7754642a90247869253b343a30baa4ed6107: Status 404 returned error can't find the container with id 1887c1b5d726f525aacb5ec8d95c7754642a90247869253b343a30baa4ed6107 Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.569002 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk"] Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.581021 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz"] Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.602316 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92"] Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.613757 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz"] Jan 21 17:32:27 crc kubenswrapper[4823]: W0121 17:32:27.633435 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6b63b33_5320_4cc6_a4b8_c359e19cdfef.slice/crio-6d666414ac69e7388b94298a25a43463b909b54c1f2071baef4a6ee1c10a717b WatchSource:0}: Error finding container 6d666414ac69e7388b94298a25a43463b909b54c1f2071baef4a6ee1c10a717b: Status 404 returned error can't find the container with id 6d666414ac69e7388b94298a25a43463b909b54c1f2071baef4a6ee1c10a717b Jan 21 17:32:27 crc kubenswrapper[4823]: W0121 17:32:27.652987 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b0d5663_0000_4ddf_bd72_893997a79681.slice/crio-b985cf3e02d8629a580bf5f7f4ac6ee51129b66c05d816c8e36fef5ea5e98fe3 WatchSource:0}: Error finding container b985cf3e02d8629a580bf5f7f4ac6ee51129b66c05d816c8e36fef5ea5e98fe3: Status 404 returned error can't find the container with id b985cf3e02d8629a580bf5f7f4ac6ee51129b66c05d816c8e36fef5ea5e98fe3 Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.667601 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.667601 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kclxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-b4h92_openstack-operators(3b0d5663-0000-4ddf-bd72-893997a79681): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.668777 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" podUID="3b0d5663-0000-4ddf-bd72-893997a79681"
Jan 21 17:32:27 crc kubenswrapper[4823]: W0121 17:32:27.671793 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33b3d08f_7b7d_4fa3_94e1_5391a80d6aaf.slice/crio-c4c3b0ad96ad5bfb117b82f31c72e8038f5ca8b12dd8ceffb3774324c8d235a5 WatchSource:0}: Error finding container c4c3b0ad96ad5bfb117b82f31c72e8038f5ca8b12dd8ceffb3774324c8d235a5: Status 404 returned error can't find the container with id c4c3b0ad96ad5bfb117b82f31c72e8038f5ca8b12dd8ceffb3774324c8d235a5
Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.678959 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6b24g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-52tkk_openstack-operators(33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.680148 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" podUID="33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf"
Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.750736 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s"]
Jan 21 17:32:27 crc kubenswrapper[4823]: I0121 17:32:27.778146 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn"
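
Note: "pull QPS exceeded" is a client-side throttle, not a registry error. The kubelet rate-limits image pulls (KubeletConfiguration registryPullQPS, default 5 pulls per second, with registryBurst, default 10), and with well over a dozen operator images requested in the same instant the excess pulls are rejected outright and left to the per-container back-off. A sketch of that behaviour using golang.org/x/time/rate (the kubelet itself uses an equivalent token-bucket limiter; the pull count below is just the rough number of operator pods visible in this log, chosen for illustration):

    package main

    import (
    	"fmt"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// Default kubelet settings: 5 pulls/s sustained, burst of 10.
    	limiter := rate.NewLimiter(rate.Limit(5), 10)
    	for i := 1; i <= 17; i++ {
    		if limiter.Allow() {
    			fmt.Printf("pull %2d: admitted\n", i)
    		} else {
    			// This is the path that surfaces as ErrImagePull: "pull QPS exceeded".
    			fmt.Printf("pull %2d: pull QPS exceeded, retry under back-off\n", i)
    		}
    	}
    }
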
pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.778473 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.778545 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:29.778525668 +0000 UTC m=+950.704656538 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "webhook-server-cert" not found Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.778999 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 17:32:27 crc kubenswrapper[4823]: E0121 17:32:27.779039 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:29.77902803 +0000 UTC m=+950.705158900 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "metrics-server-cert" not found Jan 21 17:32:27 crc kubenswrapper[4823]: W0121 17:32:27.786144 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d36d6af_75ff_4f6e_88aa_154e71609284.slice/crio-b6e3919ffc4e08ae4a6e6e47502fdc31feb994e90c99c5e4db24e3dd06cbd8a3 WatchSource:0}: Error finding container b6e3919ffc4e08ae4a6e6e47502fdc31feb994e90c99c5e4db24e3dd06cbd8a3: Status 404 returned error can't find the container with id b6e3919ffc4e08ae4a6e6e47502fdc31feb994e90c99c5e4db24e3dd06cbd8a3 Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.079776 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" event={"ID":"94b91e49-7b7f-4e7a-bcdf-31d847d8c517","Type":"ContainerStarted","Data":"a15fea04013847f17a66006c105bbda5f2051c3fc6d724bf31f0d3a99b098423"} Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.083032 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" event={"ID":"0d36d6af-75ff-4f6e-88aa-154e71609284","Type":"ContainerStarted","Data":"b6e3919ffc4e08ae4a6e6e47502fdc31feb994e90c99c5e4db24e3dd06cbd8a3"} Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.092545 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" event={"ID":"2a4a8662-2615-4cf3-950d-2602ec921aaf","Type":"ContainerStarted","Data":"1887c1b5d726f525aacb5ec8d95c7754642a90247869253b343a30baa4ed6107"} Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.107540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" event={"ID":"b6b63b33-5320-4cc6-a4b8-c359e19cdfef","Type":"ContainerStarted","Data":"6d666414ac69e7388b94298a25a43463b909b54c1f2071baef4a6ee1c10a717b"} Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.108581 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" event={"ID":"29b92151-85da-4df7-a4f1-c8fb7e08a4cf","Type":"ContainerStarted","Data":"3ad205efc2444608039a87ce12bc2924ec036d05235d58c3606f04e23fff68d8"} Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.109496 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" event={"ID":"49bc570b-b84d-48a3-b322-95b9ece80f26","Type":"ContainerStarted","Data":"8c187e18a3584d9f5e4f7fd447b64c46d313033842ef6a1cbf1966bcc9f29e78"} Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.143842 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" event={"ID":"3b0d5663-0000-4ddf-bd72-893997a79681","Type":"ContainerStarted","Data":"b985cf3e02d8629a580bf5f7f4ac6ee51129b66c05d816c8e36fef5ea5e98fe3"} Jan 21 17:32:28 crc kubenswrapper[4823]: E0121 17:32:28.147241 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" podUID="3b0d5663-0000-4ddf-bd72-893997a79681" Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.149753 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" event={"ID":"33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf","Type":"ContainerStarted","Data":"c4c3b0ad96ad5bfb117b82f31c72e8038f5ca8b12dd8ceffb3774324c8d235a5"} Jan 21 17:32:28 crc kubenswrapper[4823]: E0121 17:32:28.152294 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" podUID="33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf" Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.154431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" event={"ID":"6571a611-d208-419b-9304-5a6d6b8c1d1b","Type":"ContainerStarted","Data":"d793a5f459a95340c5ec42953163ecc89e1daeacf7c0699fc9215ce3ade78d28"} Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.162269 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" event={"ID":"9819367b-7d70-48cf-bdde-0c0e2ccf5fbd","Type":"ContainerStarted","Data":"14c4d4f210ed398d113f89020e6292ca336570207a96de79c704c7e3a0460820"} Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.173296 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" 
event={"ID":"5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128","Type":"ContainerStarted","Data":"3a7b8ba4f58044108513e2a9b74886c5df376a9c529e323264435269bbb43f2a"} Jan 21 17:32:28 crc kubenswrapper[4823]: I0121 17:32:28.211238 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" event={"ID":"ae942c2a-d0df-4bf2-8e76-ca95474ad50f","Type":"ContainerStarted","Data":"658a61b64c7916918d2fd30a161b1d2c7befba6e682f3329089476812937dccf"} Jan 21 17:32:29 crc kubenswrapper[4823]: I0121 17:32:29.031279 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.031551 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.031727 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert podName:b57ac152-55e4-445b-be02-c74b9fe96905 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:33.031643626 +0000 UTC m=+953.957774486 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert") pod "infra-operator-controller-manager-77c48c7859-5lr2z" (UID: "b57ac152-55e4-445b-be02-c74b9fe96905") : secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.224691 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" podUID="3b0d5663-0000-4ddf-bd72-893997a79681" Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.230705 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" podUID="33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf" Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.443986 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.444394 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert podName:b7356c47-15be-48c6-a78e-5389b077d2c6 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:33.444343195 +0000 UTC m=+954.370474055 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" (UID: "b7356c47-15be-48c6-a78e-5389b077d2c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 17:32:29 crc kubenswrapper[4823]: I0121 17:32:29.443547 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d"
Jan 21 17:32:29 crc kubenswrapper[4823]: I0121 17:32:29.853433 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn"
Jan 21 17:32:29 crc kubenswrapper[4823]: I0121 17:32:29.853529 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn"
Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.853592 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.853681 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:33.853662799 +0000 UTC m=+954.779793659 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "metrics-server-cert" not found
Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.853794 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 21 17:32:29 crc kubenswrapper[4823]: E0121 17:32:29.853948 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:33.853843974 +0000 UTC m=+954.779975004 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "webhook-server-cert" not found
Jan 21 17:32:33 crc kubenswrapper[4823]: I0121 17:32:33.121786 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z"
Jan 21 17:32:33 crc kubenswrapper[4823]: E0121 17:32:33.122138 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 21 17:32:33 crc kubenswrapper[4823]: E0121 17:32:33.122583 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert podName:b57ac152-55e4-445b-be02-c74b9fe96905 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:41.122556241 +0000 UTC m=+962.048687121 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert") pod "infra-operator-controller-manager-77c48c7859-5lr2z" (UID: "b57ac152-55e4-445b-be02-c74b9fe96905") : secret "infra-operator-webhook-server-cert" not found
Jan 21 17:32:33 crc kubenswrapper[4823]: I0121 17:32:33.527824 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d"
Jan 21 17:32:33 crc kubenswrapper[4823]: E0121 17:32:33.528023 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 17:32:33 crc kubenswrapper[4823]: E0121 17:32:33.528084 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert podName:b7356c47-15be-48c6-a78e-5389b077d2c6 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:41.528066481 +0000 UTC m=+962.454197341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" (UID: "b7356c47-15be-48c6-a78e-5389b077d2c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 17:32:33 crc kubenswrapper[4823]: I0121 17:32:33.934681 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn"
Jan 21 17:32:33 crc kubenswrapper[4823]: I0121 17:32:33.934758 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn"
Jan 21 17:32:33 crc kubenswrapper[4823]: E0121 17:32:33.934924 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 21 17:32:33 crc kubenswrapper[4823]: E0121 17:32:33.935065 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:41.935042629 +0000 UTC m=+962.861173489 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "metrics-server-cert" not found
Jan 21 17:32:33 crc kubenswrapper[4823]: E0121 17:32:33.934951 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 21 17:32:33 crc kubenswrapper[4823]: E0121 17:32:33.935151 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:41.935132471 +0000 UTC m=+962.861263381 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "webhook-server-cert" not found
Jan 21 17:32:40 crc kubenswrapper[4823]: E0121 17:32:40.622110 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c"
Jan 21 17:32:40 crc kubenswrapper[4823]: E0121 17:32:40.622913 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d2xcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-pqfhj_openstack-operators(5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 17:32:40 crc kubenswrapper[4823]: E0121 17:32:40.624172 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" podUID="5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128"
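[Editor's note] The secret.go:188 entries above repeat while the kubelet's volume reconciler retries the Secret mounts, with durationBeforeRetry doubling across attempts (4s at 17:32:29, 8s at 17:32:33, 16s at 17:32:41). A minimal sketch for pulling the distinct missing Secrets out of a log like this one; the regex is derived from the exact "Couldn't get secret ..." message format in the entries above, and the sample lines are copied from them.

```python
import re

# Matches the kubelet secret.go:188 message format seen in the log above, e.g.
#   Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
MISSING_SECRET = re.compile(
    r"Couldn't get secret (?P<ns>[\w-]+)/(?P<name>[\w.-]+): "
    r'secret "(?P=name)" not found'
)

def missing_secrets(log_lines):
    """Return the distinct (namespace, name) pairs the kubelet could not mount."""
    found = set()
    for line in log_lines:
        m = MISSING_SECRET.search(line)
        if m:
            found.add((m.group("ns"), m.group("name")))
    return sorted(found)

sample = [
    'E0121 17:32:29.853592 4823 secret.go:188] Couldn\'t get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found',
    'E0121 17:32:33.122138 4823 secret.go:188] Couldn\'t get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found',
]
print(missing_secrets(sample))
# [('openstack-operators', 'infra-operator-webhook-server-cert'),
#  ('openstack-operators', 'metrics-server-cert')]
```

The errors resolve on their own once the referenced Secrets appear (the MountVolume.SetUp "succeeded" entries at 17:32:41 and 17:32:57 below), so the retries here are expected startup ordering, not a fault.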
podUID="5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128" Jan 21 17:32:41 crc kubenswrapper[4823]: I0121 17:32:41.166705 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:41 crc kubenswrapper[4823]: E0121 17:32:41.166890 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:41 crc kubenswrapper[4823]: E0121 17:32:41.166981 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert podName:b57ac152-55e4-445b-be02-c74b9fe96905 nodeName:}" failed. No retries permitted until 2026-01-21 17:32:57.166958822 +0000 UTC m=+978.093089682 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert") pod "infra-operator-controller-manager-77c48c7859-5lr2z" (UID: "b57ac152-55e4-445b-be02-c74b9fe96905") : secret "infra-operator-webhook-server-cert" not found Jan 21 17:32:41 crc kubenswrapper[4823]: E0121 17:32:41.350991 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" podUID="5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128" Jan 21 17:32:41 crc kubenswrapper[4823]: I0121 17:32:41.574481 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:41 crc kubenswrapper[4823]: I0121 17:32:41.587968 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7356c47-15be-48c6-a78e-5389b077d2c6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d\" (UID: \"b7356c47-15be-48c6-a78e-5389b077d2c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:41 crc kubenswrapper[4823]: I0121 17:32:41.616507 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5v79k" Jan 21 17:32:41 crc kubenswrapper[4823]: I0121 17:32:41.625126 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:32:41 crc kubenswrapper[4823]: I0121 17:32:41.980650 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:41 crc kubenswrapper[4823]: I0121 17:32:41.981043 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:41 crc kubenswrapper[4823]: E0121 17:32:41.981308 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 17:32:41 crc kubenswrapper[4823]: E0121 17:32:41.981431 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs podName:2dcb7191-3fd3-435f-bf67-713b3953c63f nodeName:}" failed. No retries permitted until 2026-01-21 17:32:57.981401968 +0000 UTC m=+978.907532878 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs") pod "openstack-operator-controller-manager-5cbfc7c9bd-x94nn" (UID: "2dcb7191-3fd3-435f-bf67-713b3953c63f") : secret "webhook-server-cert" not found Jan 21 17:32:41 crc kubenswrapper[4823]: I0121 17:32:41.985616 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-metrics-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:44 crc kubenswrapper[4823]: E0121 17:32:44.345596 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 17:32:44 crc kubenswrapper[4823]: E0121 17:32:44.346125 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8p62w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-rxvvc_openstack-operators(ae942c2a-d0df-4bf2-8e76-ca95474ad50f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:32:44 crc kubenswrapper[4823]: E0121 17:32:44.347285 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" podUID="ae942c2a-d0df-4bf2-8e76-ca95474ad50f" Jan 21 17:32:44 crc kubenswrapper[4823]: E0121 17:32:44.370976 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" podUID="ae942c2a-d0df-4bf2-8e76-ca95474ad50f" Jan 21 17:32:44 crc kubenswrapper[4823]: E0121 17:32:44.922940 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 17:32:44 crc kubenswrapper[4823]: E0121 17:32:44.923196 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97tpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-pd8gs_openstack-operators(1282a2fd-37f7-4fd4-9c38-69b9c87f2910): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:32:44 crc kubenswrapper[4823]: E0121 17:32:44.924382 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" podUID="1282a2fd-37f7-4fd4-9c38-69b9c87f2910" Jan 21 17:32:45 crc kubenswrapper[4823]: I0121 17:32:45.071043 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:32:45 crc kubenswrapper[4823]: I0121 17:32:45.071136 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:32:45 crc kubenswrapper[4823]: I0121 17:32:45.071197 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
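[Editor's note] The liveness failure above triggers the kill/restart sequence that follows: the kubelet only restarts a container once a probe has failed FailureThreshold times in a row. A minimal sketch of that accounting; the FailureThreshold:3 / PeriodSeconds:20 numbers come from the operator container specs dumped earlier in this log, while the machine-config-daemon's own probe spec is not shown here, so treat the values as illustrative assumptions.

```python
# Kubelet-style consecutive-failure accounting for a liveness probe.
# FAILURE_THRESHOLD mirrors the FailureThreshold:3 seen in the container
# specs above; it is an assumption for the machine-config-daemon itself.
FAILURE_THRESHOLD = 3

def should_restart(probe_results):
    """True once FAILURE_THRESHOLD consecutive probe failures accumulate."""
    consecutive = 0
    for ok in probe_results:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= FAILURE_THRESHOLD:
            return True
    return False

# Repeated "connect: connection refused" results -> restart, matching the
# "failed liveness probe, will be restarted" entry below (gracePeriod=600).
print(should_restart([True, False, False, False]))  # True
```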
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:32:45 crc kubenswrapper[4823]: I0121 17:32:45.072060 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fcf0ff8adb2bfb185b4729793f83cb8f174b95d31f0510e681726cc4e1eb2380"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:32:45 crc kubenswrapper[4823]: I0121 17:32:45.072150 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://fcf0ff8adb2bfb185b4729793f83cb8f174b95d31f0510e681726cc4e1eb2380" gracePeriod=600 Jan 21 17:32:45 crc kubenswrapper[4823]: I0121 17:32:45.380873 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="fcf0ff8adb2bfb185b4729793f83cb8f174b95d31f0510e681726cc4e1eb2380" exitCode=0 Jan 21 17:32:45 crc kubenswrapper[4823]: I0121 17:32:45.380890 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"fcf0ff8adb2bfb185b4729793f83cb8f174b95d31f0510e681726cc4e1eb2380"} Jan 21 17:32:45 crc kubenswrapper[4823]: I0121 17:32:45.380968 4823 scope.go:117] "RemoveContainer" containerID="01551426f5ae121d56576d8ebbe31d7f30aa5b3ef8f744c35e0d4bccddcab2b3" Jan 21 17:32:45 crc kubenswrapper[4823]: E0121 17:32:45.382638 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" podUID="1282a2fd-37f7-4fd4-9c38-69b9c87f2910" Jan 21 17:32:45 crc kubenswrapper[4823]: E0121 17:32:45.511874 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729" Jan 21 17:32:45 crc kubenswrapper[4823]: E0121 17:32:45.512093 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kfjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-cr4d5_openstack-operators(6571a611-d208-419b-9304-5a6d6b8c1d1b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:32:45 crc kubenswrapper[4823]: E0121 17:32:45.513328 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" podUID="6571a611-d208-419b-9304-5a6d6b8c1d1b" Jan 21 17:32:46 crc kubenswrapper[4823]: E0121 17:32:46.388743 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" podUID="6571a611-d208-419b-9304-5a6d6b8c1d1b" Jan 21 17:32:57 crc kubenswrapper[4823]: I0121 17:32:57.599314 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:57 crc kubenswrapper[4823]: I0121 17:32:57.611587 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b57ac152-55e4-445b-be02-c74b9fe96905-cert\") pod \"infra-operator-controller-manager-77c48c7859-5lr2z\" (UID: \"b57ac152-55e4-445b-be02-c74b9fe96905\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:57 crc kubenswrapper[4823]: I0121 17:32:57.712900 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hnfx5" Jan 21 17:32:57 crc kubenswrapper[4823]: I0121 17:32:57.721061 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:32:58 crc kubenswrapper[4823]: I0121 17:32:58.004283 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:58 crc kubenswrapper[4823]: I0121 17:32:58.008833 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2dcb7191-3fd3-435f-bf67-713b3953c63f-webhook-certs\") pod \"openstack-operator-controller-manager-5cbfc7c9bd-x94nn\" (UID: \"2dcb7191-3fd3-435f-bf67-713b3953c63f\") " pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:32:58 crc kubenswrapper[4823]: I0121 17:32:58.168574 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zml9s" Jan 21 17:32:58 crc kubenswrapper[4823]: I0121 17:32:58.176911 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:33:05 crc kubenswrapper[4823]: E0121 17:33:05.149802 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 17:33:05 crc kubenswrapper[4823]: E0121 17:33:05.150388 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fhmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-h462s_openstack-operators(0d36d6af-75ff-4f6e-88aa-154e71609284): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:33:05 crc kubenswrapper[4823]: E0121 17:33:05.151578 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" podUID="0d36d6af-75ff-4f6e-88aa-154e71609284" Jan 21 17:33:05 crc kubenswrapper[4823]: E0121 17:33:05.691897 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" podUID="0d36d6af-75ff-4f6e-88aa-154e71609284" Jan 21 17:33:06 crc kubenswrapper[4823]: E0121 17:33:06.276538 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 17:33:06 crc kubenswrapper[4823]: E0121 17:33:06.277132 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kclxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-b4h92_openstack-operators(3b0d5663-0000-4ddf-bd72-893997a79681): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:33:06 crc kubenswrapper[4823]: E0121 17:33:06.278722 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" podUID="3b0d5663-0000-4ddf-bd72-893997a79681" Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.236690 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d"] Jan 21 17:33:07 crc kubenswrapper[4823]: W0121 17:33:07.259130 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7356c47_15be_48c6_a78e_5389b077d2c6.slice/crio-24a5645b334b1db0707ac1648889ebcb39482238d9a02e0cbe09e69186303235 WatchSource:0}: Error finding container 24a5645b334b1db0707ac1648889ebcb39482238d9a02e0cbe09e69186303235: Status 404 returned error can't find the container with id 24a5645b334b1db0707ac1648889ebcb39482238d9a02e0cbe09e69186303235 Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.526426 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn"] Jan 21 17:33:07 crc kubenswrapper[4823]: W0121 17:33:07.537685 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dcb7191_3fd3_435f_bf67_713b3953c63f.slice/crio-b0d39a850cf141c0b68981d12876fa2fc2922ef47525c97f1882c7b161fd1e41 WatchSource:0}: Error finding container b0d39a850cf141c0b68981d12876fa2fc2922ef47525c97f1882c7b161fd1e41: Status 404 returned error can't find the container with id b0d39a850cf141c0b68981d12876fa2fc2922ef47525c97f1882c7b161fd1e41 Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.645563 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z"] Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.718635 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
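[Editor's note] The PullImage failures above follow a fixed pattern: a pull fails with ErrImagePull, then subsequent sync attempts are skipped with ImagePullBackOff until the per-image backoff window expires. A minimal sketch of that cadence; the 10s base and 5m cap are the commonly cited kubelet defaults and are assumptions here, not values read from this log.

```python
# Sketch of the ErrImagePull -> ImagePullBackOff delay schedule implied by the
# repeated "Back-off pulling image" entries above. The base/cap values are
# assumed defaults, not taken from this log.
def backoff_schedule(base=10.0, cap=300.0, attempts=7):
    """Exponential per-image backoff: delays double until the cap is reached."""
    delay, out = base, []
    for _ in range(attempts):
        out.append(min(delay, cap))
        delay *= 2
    return out

print(backoff_schedule())  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```

Once a pull finally completes, the kubelet emits the ContainerStarted PLEG events seen below for each operator pod.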
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"bf25751a26ff3c64f8ae67c52c13c550034b8f3dcc6b86f0b444f95206ccf684"} Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.724990 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" event={"ID":"b57ac152-55e4-445b-be02-c74b9fe96905","Type":"ContainerStarted","Data":"c5a316e4ab0d298f300665eca6db8abb1e181c8c32eccbe7d6df76e27d351622"} Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.726896 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" event={"ID":"2dcb7191-3fd3-435f-bf67-713b3953c63f","Type":"ContainerStarted","Data":"b0d39a850cf141c0b68981d12876fa2fc2922ef47525c97f1882c7b161fd1e41"} Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.731311 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2" event={"ID":"41648873-3abc-47a7-8c4d-8c3a15bdf09e","Type":"ContainerStarted","Data":"bc659ea2d9d12a9a9ee66f19ea734e57eeebef58c026faac4c5b34630a01d5d9"} Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.741340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" event={"ID":"9fe37a8e-6b17-4aad-8787-142c28faac52","Type":"ContainerStarted","Data":"93e86111fc6eeb8c9423c2a3d6f46301b5f88879cfd0a63142021d9a3375f04d"} Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.741767 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.743766 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" event={"ID":"b7356c47-15be-48c6-a78e-5389b077d2c6","Type":"ContainerStarted","Data":"24a5645b334b1db0707ac1648889ebcb39482238d9a02e0cbe09e69186303235"} Jan 21 17:33:07 crc kubenswrapper[4823]: I0121 17:33:07.782747 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" podStartSLOduration=4.541483612 podStartE2EDuration="43.782718592s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:32:26.272601726 +0000 UTC m=+947.198732586" lastFinishedPulling="2026-01-21 17:33:05.513836706 +0000 UTC m=+986.439967566" observedRunningTime="2026-01-21 17:33:07.77214002 +0000 UTC m=+988.698270880" watchObservedRunningTime="2026-01-21 17:33:07.782718592 +0000 UTC m=+988.708849452" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.769422 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" event={"ID":"c05b436d-2d25-449f-a929-9424e4b6021f","Type":"ContainerStarted","Data":"848761294bfee83749b4f5d3f43acbd934db6e45d74d3eaae0283f2ba50a504b"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.770528 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.774824 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" event={"ID":"4923d7a0-77b7-4d86-a6fc-fff0e9a81766","Type":"ContainerStarted","Data":"4c666aa9debea2e2fb420a85637daff481e8e99c59bffcbd67e1206b7ba7b1a4"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.775419 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.778341 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" event={"ID":"33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf","Type":"ContainerStarted","Data":"dff41ee0859b48c86e2b6140aa7423780fa7157db194d97d65d690d76591e6cc"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.778786 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.780389 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" event={"ID":"6571a611-d208-419b-9304-5a6d6b8c1d1b","Type":"ContainerStarted","Data":"74e355481754f1a1da41890111519d854ad55f74b20477bda6b5700b8b171cbf"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.780718 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.795740 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" podStartSLOduration=5.033524647 podStartE2EDuration="44.795726048s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:32:26.494508465 +0000 UTC m=+947.420639325" lastFinishedPulling="2026-01-21 17:33:06.256709856 +0000 UTC m=+987.182840726" observedRunningTime="2026-01-21 17:33:08.79499785 +0000 UTC m=+989.721128710" watchObservedRunningTime="2026-01-21 17:33:08.795726048 +0000 UTC m=+989.721856908" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.805456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm" event={"ID":"1a2264de-b154-4669-aeae-fd1e71b29b0d","Type":"ContainerStarted","Data":"43beb916e76236c497699d2b96404a85fa8bfec62d45050a82f939c3f8d21459"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.806016 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.867605 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" event={"ID":"ae942c2a-d0df-4bf2-8e76-ca95474ad50f","Type":"ContainerStarted","Data":"f3b2b9ac53227d04b15992b11827c74dabb9986a6f1ca37da8792dedd23f4c45"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.868429 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.869585 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" 
event={"ID":"49bc570b-b84d-48a3-b322-95b9ece80f26","Type":"ContainerStarted","Data":"65a07cbd558f93704454efef0841cf817fecb3401e66f62ff5e2d0e642ef7c1e"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.869987 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.870871 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" event={"ID":"2a4a8662-2615-4cf3-950d-2602ec921aaf","Type":"ContainerStarted","Data":"65cfcdf6fdb63379795dd2429fa34a568d5b69d8924329b0ec68a557471c175d"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.871203 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.872204 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" event={"ID":"57081a15-e11e-4d49-b516-3f8ccabea011","Type":"ContainerStarted","Data":"63e1e8c610f0a46eac68d5b5cb1be38df09d5b8844b722cdf9a759064c249443"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.872568 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.883417 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" event={"ID":"94b91e49-7b7f-4e7a-bcdf-31d847d8c517","Type":"ContainerStarted","Data":"f933d6768a9feb62e0f3465bec4c8eff2421eb1b6a8d1a9e048a1872dae1fbf0"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.884231 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.885546 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" event={"ID":"9819367b-7d70-48cf-bdde-0c0e2ccf5fbd","Type":"ContainerStarted","Data":"ad7606303a444bc1356c410003e633974d02af88514b95394e09ee6aa2b8a829"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.885931 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.887262 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" event={"ID":"b6b63b33-5320-4cc6-a4b8-c359e19cdfef","Type":"ContainerStarted","Data":"3ebf37bee6219d5c75b4eade1b40209532ea6105a79af5303bf41f00a53350d1"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.887775 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.891683 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" event={"ID":"1282a2fd-37f7-4fd4-9c38-69b9c87f2910","Type":"ContainerStarted","Data":"24f5e4596509638a734fa2883133e9718c49dd8ccb78d381bba9324941b3bfa4"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 
17:33:08.892457 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.898278 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" event={"ID":"29b92151-85da-4df7-a4f1-c8fb7e08a4cf","Type":"ContainerStarted","Data":"a1973297e38b2193f27b83afbc6b0b51ac7be6ecf9579a10a42babf857855f8d"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.898930 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.913822 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" event={"ID":"5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128","Type":"ContainerStarted","Data":"9dae99f08a4f8b9163d889e12060208e93e709d3c5337fbfbae94629bff87c6c"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.914371 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.930653 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" event={"ID":"85830ef7-1db0-47bf-b03f-0720fceda12b","Type":"ContainerStarted","Data":"033bf995d7bed9c25bf3ffd8e13eafb413064ed22b5a86e0167ab952907553d9"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.930790 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.932981 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" event={"ID":"2dcb7191-3fd3-435f-bf67-713b3953c63f","Type":"ContainerStarted","Data":"851ac70e6ba787d4db84a43e7296e8274f31dfa12f450c02e4659450316d712a"} Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.933007 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:33:08 crc kubenswrapper[4823]: I0121 17:33:08.933312 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2" Jan 21 17:33:09 crc kubenswrapper[4823]: I0121 17:33:09.120159 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" podStartSLOduration=5.353175867 podStartE2EDuration="45.120137185s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:32:26.491414619 +0000 UTC m=+947.417545479" lastFinishedPulling="2026-01-21 17:33:06.258375937 +0000 UTC m=+987.184506797" observedRunningTime="2026-01-21 17:33:09.006043995 +0000 UTC m=+989.932174855" watchObservedRunningTime="2026-01-21 17:33:09.120137185 +0000 UTC m=+990.046268045" Jan 21 17:33:09 crc kubenswrapper[4823]: I0121 17:33:09.120602 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" podStartSLOduration=4.363443938 podStartE2EDuration="44.120595637s" 
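[Editor's note] The pod_startup_latency_tracker entries above and below are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image pull window (lastFinishedPulling minus firstStartedPulling). A small check using the glance-operator numbers logged above; the field semantics are inferred from this arithmetic, not from kubelet documentation.

```python
from datetime import datetime

def ts(s):
    # strptime accepts at most 6 fractional digits, so truncate the
    # nanosecond timestamps logged above to microseconds.
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

# Values copied from the glance-operator startup-duration entry above.
created       = datetime.strptime("2026-01-21 17:32:24", "%Y-%m-%d %H:%M:%S")
first_pull    = ts("2026-01-21 17:32:26.272601726")
last_pull     = ts("2026-01-21 17:33:05.513836706")
watch_running = ts("2026-01-21 17:33:07.782718592")

e2e = (watch_running - created).total_seconds()
slo = e2e - (last_pull - first_pull).total_seconds()
print(round(e2e, 6), round(slo, 6))
# ~43.782718 and ~4.541483, matching podStartE2EDuration="43.782718592s"
# and podStartSLOduration=4.541483612 (the SLO figure excludes pull time).
```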
podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.268106542 +0000 UTC m=+948.194237402" lastFinishedPulling="2026-01-21 17:33:07.025258241 +0000 UTC m=+987.951389101" observedRunningTime="2026-01-21 17:33:09.108272262 +0000 UTC m=+990.034403122" watchObservedRunningTime="2026-01-21 17:33:09.120595637 +0000 UTC m=+990.046726497" Jan 21 17:33:09 crc kubenswrapper[4823]: I0121 17:33:09.211588 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" podStartSLOduration=4.866719844 podStartE2EDuration="44.211573245s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.67877871 +0000 UTC m=+948.604909580" lastFinishedPulling="2026-01-21 17:33:07.023632121 +0000 UTC m=+987.949762981" observedRunningTime="2026-01-21 17:33:09.209988316 +0000 UTC m=+990.136119176" watchObservedRunningTime="2026-01-21 17:33:09.211573245 +0000 UTC m=+990.137704105" Jan 21 17:33:09 crc kubenswrapper[4823]: I0121 17:33:09.517086 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2" podStartSLOduration=5.352021439 podStartE2EDuration="45.517063815s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:32:26.092114971 +0000 UTC m=+947.018245831" lastFinishedPulling="2026-01-21 17:33:06.257157347 +0000 UTC m=+987.183288207" observedRunningTime="2026-01-21 17:33:09.506954175 +0000 UTC m=+990.433085045" watchObservedRunningTime="2026-01-21 17:33:09.517063815 +0000 UTC m=+990.443194675" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.005474 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" podStartSLOduration=45.005453656 podStartE2EDuration="45.005453656s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:33:09.74019641 +0000 UTC m=+990.666327280" watchObservedRunningTime="2026-01-21 17:33:10.005453656 +0000 UTC m=+990.931584516" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.038439 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" podStartSLOduration=6.377441201 podStartE2EDuration="45.038418081s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.602262968 +0000 UTC m=+948.528393828" lastFinishedPulling="2026-01-21 17:33:06.263239848 +0000 UTC m=+987.189370708" observedRunningTime="2026-01-21 17:33:10.035608211 +0000 UTC m=+990.961739071" watchObservedRunningTime="2026-01-21 17:33:10.038418081 +0000 UTC m=+990.964548941" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.041776 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" podStartSLOduration=6.08594346 podStartE2EDuration="45.041758923s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.300731789 +0000 UTC m=+948.226862649" lastFinishedPulling="2026-01-21 17:33:06.256547252 +0000 UTC m=+987.182678112" observedRunningTime="2026-01-21 17:33:10.001091068 +0000 UTC m=+990.927221928" watchObservedRunningTime="2026-01-21 
17:33:10.041758923 +0000 UTC m=+990.967889783" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.083816 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" podStartSLOduration=5.857800205 podStartE2EDuration="46.083786992s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:32:26.798784542 +0000 UTC m=+947.724915402" lastFinishedPulling="2026-01-21 17:33:07.024771329 +0000 UTC m=+987.950902189" observedRunningTime="2026-01-21 17:33:10.078535312 +0000 UTC m=+991.004666172" watchObservedRunningTime="2026-01-21 17:33:10.083786992 +0000 UTC m=+991.009917852" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.207703 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" podStartSLOduration=6.865540474 podStartE2EDuration="46.207688784s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:32:26.917571071 +0000 UTC m=+947.843701931" lastFinishedPulling="2026-01-21 17:33:06.259719371 +0000 UTC m=+987.185850241" observedRunningTime="2026-01-21 17:33:10.199271636 +0000 UTC m=+991.125402496" watchObservedRunningTime="2026-01-21 17:33:10.207688784 +0000 UTC m=+991.133819644" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.265146 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" podStartSLOduration=6.422815308 podStartE2EDuration="45.265124504s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.414458542 +0000 UTC m=+948.340589412" lastFinishedPulling="2026-01-21 17:33:06.256767748 +0000 UTC m=+987.182898608" observedRunningTime="2026-01-21 17:33:10.259009063 +0000 UTC m=+991.185139923" watchObservedRunningTime="2026-01-21 17:33:10.265124504 +0000 UTC m=+991.191255354" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.356978 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm" podStartSLOduration=6.371632043 podStartE2EDuration="46.356956063s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:32:26.273048847 +0000 UTC m=+947.199179707" lastFinishedPulling="2026-01-21 17:33:06.258372867 +0000 UTC m=+987.184503727" observedRunningTime="2026-01-21 17:33:10.290254795 +0000 UTC m=+991.216385645" watchObservedRunningTime="2026-01-21 17:33:10.356956063 +0000 UTC m=+991.283086923" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.358954 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" podStartSLOduration=7.095689925 podStartE2EDuration="46.358940183s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.000076452 +0000 UTC m=+947.926207312" lastFinishedPulling="2026-01-21 17:33:06.26332671 +0000 UTC m=+987.189457570" observedRunningTime="2026-01-21 17:33:10.316899433 +0000 UTC m=+991.243030293" watchObservedRunningTime="2026-01-21 17:33:10.358940183 +0000 UTC m=+991.285071043" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.367239 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" 
podStartSLOduration=7.065876966 podStartE2EDuration="46.367222057s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:32:26.962600414 +0000 UTC m=+947.888731274" lastFinishedPulling="2026-01-21 17:33:06.263945515 +0000 UTC m=+987.190076365" observedRunningTime="2026-01-21 17:33:10.33373997 +0000 UTC m=+991.259870830" watchObservedRunningTime="2026-01-21 17:33:10.367222057 +0000 UTC m=+991.293352917" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.372545 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" podStartSLOduration=5.612613131 podStartE2EDuration="45.372530338s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.263459307 +0000 UTC m=+948.189590177" lastFinishedPulling="2026-01-21 17:33:07.023376524 +0000 UTC m=+987.949507384" observedRunningTime="2026-01-21 17:33:10.360342957 +0000 UTC m=+991.286473817" watchObservedRunningTime="2026-01-21 17:33:10.372530338 +0000 UTC m=+991.298661188" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.427614 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" podStartSLOduration=6.166461213 podStartE2EDuration="45.427591509s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:26.997167269 +0000 UTC m=+947.923298129" lastFinishedPulling="2026-01-21 17:33:06.258297565 +0000 UTC m=+987.184428425" observedRunningTime="2026-01-21 17:33:10.405971665 +0000 UTC m=+991.332102525" watchObservedRunningTime="2026-01-21 17:33:10.427591509 +0000 UTC m=+991.353722369" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.438871 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" podStartSLOduration=5.79820377 podStartE2EDuration="45.438836097s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.382813739 +0000 UTC m=+948.308944599" lastFinishedPulling="2026-01-21 17:33:07.023446066 +0000 UTC m=+987.949576926" observedRunningTime="2026-01-21 17:33:10.434238864 +0000 UTC m=+991.360369724" watchObservedRunningTime="2026-01-21 17:33:10.438836097 +0000 UTC m=+991.364966957" Jan 21 17:33:10 crc kubenswrapper[4823]: I0121 17:33:10.465743 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" podStartSLOduration=6.855716953 podStartE2EDuration="45.465723262s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.646565534 +0000 UTC m=+948.572696404" lastFinishedPulling="2026-01-21 17:33:06.256571853 +0000 UTC m=+987.182702713" observedRunningTime="2026-01-21 17:33:10.462299787 +0000 UTC m=+991.388430647" watchObservedRunningTime="2026-01-21 17:33:10.465723262 +0000 UTC m=+991.391854122" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.186031 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-z8pp2" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.188803 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-r6txm" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.239031 
4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-2b4l6" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.296181 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hx6mm" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.356145 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5srsl" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.522716 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-pd8gs" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.648839 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-8b4ft" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.676525 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-thq22" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.691432 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-56n9m" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.742888 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.826627 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pqfhj" Jan 21 17:33:15 crc kubenswrapper[4823]: I0121 17:33:15.865232 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-rxvvc" Jan 21 17:33:16 crc kubenswrapper[4823]: I0121 17:33:16.024600 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-cr4d5" Jan 21 17:33:16 crc kubenswrapper[4823]: I0121 17:33:16.048207 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lds9d" Jan 21 17:33:16 crc kubenswrapper[4823]: I0121 17:33:16.072044 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-52tkk" Jan 21 17:33:16 crc kubenswrapper[4823]: I0121 17:33:16.098468 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9qqbg" Jan 21 17:33:16 crc kubenswrapper[4823]: I0121 17:33:16.225487 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-msnsz" Jan 21 17:33:16 crc kubenswrapper[4823]: I0121 17:33:16.340205 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6cdc5b758-zbxpz" Jan 21 17:33:17 crc kubenswrapper[4823]: I0121 17:33:17.064008 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" event={"ID":"b7356c47-15be-48c6-a78e-5389b077d2c6","Type":"ContainerStarted","Data":"35edb0e6c44b26b68ab9054f12c8062bbecc08cbfbb972de5e3d0adb272ec7bb"} Jan 21 17:33:17 crc kubenswrapper[4823]: I0121 17:33:17.064331 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:33:17 crc kubenswrapper[4823]: I0121 17:33:17.065872 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" event={"ID":"b57ac152-55e4-445b-be02-c74b9fe96905","Type":"ContainerStarted","Data":"75f291424388f82f71fb0250531dce726640618af67272cc8ee0333e0d702e37"} Jan 21 17:33:17 crc kubenswrapper[4823]: I0121 17:33:17.066052 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:33:17 crc kubenswrapper[4823]: I0121 17:33:17.102602 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" podStartSLOduration=42.76036709 podStartE2EDuration="52.102582391s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:33:07.275501096 +0000 UTC m=+988.201631946" lastFinishedPulling="2026-01-21 17:33:16.617716387 +0000 UTC m=+997.543847247" observedRunningTime="2026-01-21 17:33:17.093160818 +0000 UTC m=+998.019291688" watchObservedRunningTime="2026-01-21 17:33:17.102582391 +0000 UTC m=+998.028713251" Jan 21 17:33:17 crc kubenswrapper[4823]: I0121 17:33:17.120175 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" podStartSLOduration=44.17594938 podStartE2EDuration="53.120151585s" podCreationTimestamp="2026-01-21 17:32:24 +0000 UTC" firstStartedPulling="2026-01-21 17:33:07.69202106 +0000 UTC m=+988.618151920" lastFinishedPulling="2026-01-21 17:33:16.636223265 +0000 UTC m=+997.562354125" observedRunningTime="2026-01-21 17:33:17.119085589 +0000 UTC m=+998.045216459" watchObservedRunningTime="2026-01-21 17:33:17.120151585 +0000 UTC m=+998.046282445" Jan 21 17:33:18 crc kubenswrapper[4823]: I0121 17:33:18.183086 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5cbfc7c9bd-x94nn" Jan 21 17:33:20 crc kubenswrapper[4823]: I0121 17:33:20.091540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" event={"ID":"0d36d6af-75ff-4f6e-88aa-154e71609284","Type":"ContainerStarted","Data":"98860334d58b4fdaa31fb0b0e0e2323b80aea7ad322342cf104021f0291463db"} Jan 21 17:33:20 crc kubenswrapper[4823]: I0121 17:33:20.118045 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h462s" podStartSLOduration=3.153601609 podStartE2EDuration="55.118017537s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.78871586 +0000 UTC m=+948.714846720" lastFinishedPulling="2026-01-21 17:33:19.753131788 +0000 UTC m=+1000.679262648" observedRunningTime="2026-01-21 17:33:20.108986293 +0000 UTC m=+1001.035117163" watchObservedRunningTime="2026-01-21 17:33:20.118017537 +0000 UTC 
m=+1001.044148397" Jan 21 17:33:20 crc kubenswrapper[4823]: E0121 17:33:20.346667 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" podUID="3b0d5663-0000-4ddf-bd72-893997a79681" Jan 21 17:33:21 crc kubenswrapper[4823]: I0121 17:33:21.633111 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d" Jan 21 17:33:27 crc kubenswrapper[4823]: I0121 17:33:27.727592 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-5lr2z" Jan 21 17:33:33 crc kubenswrapper[4823]: I0121 17:33:33.178633 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" event={"ID":"3b0d5663-0000-4ddf-bd72-893997a79681","Type":"ContainerStarted","Data":"455bc2f59a4be985b5578f6a3c60c946841432dc917566a9ab798b8e96e5d101"} Jan 21 17:33:33 crc kubenswrapper[4823]: I0121 17:33:33.179385 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" Jan 21 17:33:33 crc kubenswrapper[4823]: I0121 17:33:33.198902 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" podStartSLOduration=3.125539313 podStartE2EDuration="1m8.198882814s" podCreationTimestamp="2026-01-21 17:32:25 +0000 UTC" firstStartedPulling="2026-01-21 17:32:27.667282696 +0000 UTC m=+948.593413556" lastFinishedPulling="2026-01-21 17:33:32.740626177 +0000 UTC m=+1013.666757057" observedRunningTime="2026-01-21 17:33:33.193926581 +0000 UTC m=+1014.120057441" watchObservedRunningTime="2026-01-21 17:33:33.198882814 +0000 UTC m=+1014.125013674" Jan 21 17:33:46 crc kubenswrapper[4823]: I0121 17:33:46.173988 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-b4h92" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.434113 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2mtl"] Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.440675 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.443228 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7rggj" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.443533 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.445260 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.445442 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.448804 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2mtl"] Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.486503 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf023b91-8240-4311-a8cb-185f4f0f75db-config\") pod \"dnsmasq-dns-675f4bcbfc-f2mtl\" (UID: \"cf023b91-8240-4311-a8cb-185f4f0f75db\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.486618 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtdmd\" (UniqueName: \"kubernetes.io/projected/cf023b91-8240-4311-a8cb-185f4f0f75db-kube-api-access-qtdmd\") pod \"dnsmasq-dns-675f4bcbfc-f2mtl\" (UID: \"cf023b91-8240-4311-a8cb-185f4f0f75db\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.509987 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sv88h"] Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.514899 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.517783 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.523690 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sv88h"] Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.587876 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf023b91-8240-4311-a8cb-185f4f0f75db-config\") pod \"dnsmasq-dns-675f4bcbfc-f2mtl\" (UID: \"cf023b91-8240-4311-a8cb-185f4f0f75db\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.587945 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sv88h\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.588035 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgf8\" (UniqueName: \"kubernetes.io/projected/ce644b68-c368-42dd-b834-d86cee38759d-kube-api-access-rkgf8\") pod \"dnsmasq-dns-78dd6ddcc-sv88h\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.588071 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtdmd\" (UniqueName: \"kubernetes.io/projected/cf023b91-8240-4311-a8cb-185f4f0f75db-kube-api-access-qtdmd\") pod \"dnsmasq-dns-675f4bcbfc-f2mtl\" (UID: \"cf023b91-8240-4311-a8cb-185f4f0f75db\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.588107 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-config\") pod \"dnsmasq-dns-78dd6ddcc-sv88h\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.589104 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf023b91-8240-4311-a8cb-185f4f0f75db-config\") pod \"dnsmasq-dns-675f4bcbfc-f2mtl\" (UID: \"cf023b91-8240-4311-a8cb-185f4f0f75db\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.614382 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtdmd\" (UniqueName: \"kubernetes.io/projected/cf023b91-8240-4311-a8cb-185f4f0f75db-kube-api-access-qtdmd\") pod \"dnsmasq-dns-675f4bcbfc-f2mtl\" (UID: \"cf023b91-8240-4311-a8cb-185f4f0f75db\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.689404 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sv88h\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 
17:34:07.689511 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgf8\" (UniqueName: \"kubernetes.io/projected/ce644b68-c368-42dd-b834-d86cee38759d-kube-api-access-rkgf8\") pod \"dnsmasq-dns-78dd6ddcc-sv88h\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.689533 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-config\") pod \"dnsmasq-dns-78dd6ddcc-sv88h\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.690445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-config\") pod \"dnsmasq-dns-78dd6ddcc-sv88h\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.690627 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sv88h\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.706103 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgf8\" (UniqueName: \"kubernetes.io/projected/ce644b68-c368-42dd-b834-d86cee38759d-kube-api-access-rkgf8\") pod \"dnsmasq-dns-78dd6ddcc-sv88h\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.768991 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:07 crc kubenswrapper[4823]: I0121 17:34:07.832803 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:08 crc kubenswrapper[4823]: I0121 17:34:08.241736 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2mtl"] Jan 21 17:34:08 crc kubenswrapper[4823]: I0121 17:34:08.346758 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sv88h"] Jan 21 17:34:08 crc kubenswrapper[4823]: W0121 17:34:08.349071 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce644b68_c368_42dd_b834_d86cee38759d.slice/crio-27567ff42119ce363f11762f028daf9911ef0ca64f34cdb6128b6acc82e138c5 WatchSource:0}: Error finding container 27567ff42119ce363f11762f028daf9911ef0ca64f34cdb6128b6acc82e138c5: Status 404 returned error can't find the container with id 27567ff42119ce363f11762f028daf9911ef0ca64f34cdb6128b6acc82e138c5 Jan 21 17:34:08 crc kubenswrapper[4823]: I0121 17:34:08.420820 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" event={"ID":"cf023b91-8240-4311-a8cb-185f4f0f75db","Type":"ContainerStarted","Data":"2e79dec38d8e97100e0c0eb2b1e1a6fbad9e2317846a5ca59dddb38bd6bbbcab"} Jan 21 17:34:08 crc kubenswrapper[4823]: I0121 17:34:08.421791 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" event={"ID":"ce644b68-c368-42dd-b834-d86cee38759d","Type":"ContainerStarted","Data":"27567ff42119ce363f11762f028daf9911ef0ca64f34cdb6128b6acc82e138c5"} Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.330092 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2mtl"] Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.363923 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvw84"] Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.365667 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.377894 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvw84"] Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.386320 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xvw84\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.386597 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hg9f\" (UniqueName: \"kubernetes.io/projected/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-kube-api-access-8hg9f\") pod \"dnsmasq-dns-666b6646f7-xvw84\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.386701 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-config\") pod \"dnsmasq-dns-666b6646f7-xvw84\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.488351 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xvw84\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.488454 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hg9f\" (UniqueName: \"kubernetes.io/projected/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-kube-api-access-8hg9f\") pod \"dnsmasq-dns-666b6646f7-xvw84\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.488487 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-config\") pod \"dnsmasq-dns-666b6646f7-xvw84\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.489482 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-config\") pod \"dnsmasq-dns-666b6646f7-xvw84\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.489995 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xvw84\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.532239 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hg9f\" (UniqueName: 
\"kubernetes.io/projected/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-kube-api-access-8hg9f\") pod \"dnsmasq-dns-666b6646f7-xvw84\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.695694 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.764321 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sv88h"] Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.816544 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zs99j"] Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.819253 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.841254 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zs99j"] Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.870040 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-config\") pod \"dnsmasq-dns-57d769cc4f-zs99j\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.870145 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9cz\" (UniqueName: \"kubernetes.io/projected/6b467302-1643-4309-a17c-7d54d3307272-kube-api-access-zn9cz\") pod \"dnsmasq-dns-57d769cc4f-zs99j\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:10 crc kubenswrapper[4823]: I0121 17:34:10.870226 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zs99j\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.044044 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-config\") pod \"dnsmasq-dns-57d769cc4f-zs99j\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.044139 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9cz\" (UniqueName: \"kubernetes.io/projected/6b467302-1643-4309-a17c-7d54d3307272-kube-api-access-zn9cz\") pod \"dnsmasq-dns-57d769cc4f-zs99j\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.044223 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zs99j\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.045542 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zs99j\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.046334 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-config\") pod \"dnsmasq-dns-57d769cc4f-zs99j\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.084888 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn9cz\" (UniqueName: \"kubernetes.io/projected/6b467302-1643-4309-a17c-7d54d3307272-kube-api-access-zn9cz\") pod \"dnsmasq-dns-57d769cc4f-zs99j\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.141811 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.478637 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.480182 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.489440 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.492988 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.493206 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.493474 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.493674 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.493940 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.497328 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8pdgl" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.518439 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661575 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661626 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6jht\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-kube-api-access-p6jht\") pod \"rabbitmq-server-0\" (UID: 
\"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661650 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661688 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661710 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661760 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-config-data\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661780 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661796 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661818 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661870 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.661894 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc 
kubenswrapper[4823]: I0121 17:34:11.769762 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6jht\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-kube-api-access-p6jht\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.769819 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.769842 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.769876 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.769928 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-config-data\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.769948 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.769964 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.769986 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.770024 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.770040 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.770070 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.771302 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.771631 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-config-data\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.771870 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.772147 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.772787 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.773263 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.798194 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.799890 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.801228 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.802808 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.823821 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6jht\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-kube-api-access-p6jht\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.839178 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " pod="openstack/rabbitmq-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.915512 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvw84"] Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.933338 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zs99j"] Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.978781 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.980096 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.982902 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bhgxm" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.990504 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.990693 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.990831 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.990922 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.991038 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 17:34:11 crc kubenswrapper[4823]: I0121 17:34:11.991167 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.008873 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.110105 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184044 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184113 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184135 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95dln\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-kube-api-access-95dln\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184171 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184199 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184222 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184249 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184273 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184309 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184366 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.184393 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.286627 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.286711 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.286761 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.286818 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.286867 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95dln\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-kube-api-access-95dln\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.286908 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.286933 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc 
kubenswrapper[4823]: I0121 17:34:12.286963 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.287001 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.287037 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.287084 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.295515 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.296907 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.297309 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.297501 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.298205 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.298480 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.298639 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.308874 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.309581 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.312072 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.330975 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95dln\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-kube-api-access-95dln\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.333332 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.349606 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.438049 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 17:34:12 crc kubenswrapper[4823]: W0121 17:34:12.441998 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619d3aad_c1a1_4d30_ac6f_a0b9535371dc.slice/crio-f9837f691cfa99ff8cda01afeea49c100bc9c3e83542ce93e3d4b74d81f7a8f7 WatchSource:0}: Error finding container f9837f691cfa99ff8cda01afeea49c100bc9c3e83542ce93e3d4b74d81f7a8f7: Status 404 returned error can't find the container with id f9837f691cfa99ff8cda01afeea49c100bc9c3e83542ce93e3d4b74d81f7a8f7 Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.484143 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"619d3aad-c1a1-4d30-ac6f-a0b9535371dc","Type":"ContainerStarted","Data":"f9837f691cfa99ff8cda01afeea49c100bc9c3e83542ce93e3d4b74d81f7a8f7"} Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.485471 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xvw84" event={"ID":"f7dc33bd-09be-4aca-98c5-3d7b69dd0249","Type":"ContainerStarted","Data":"7321d11bfcb1880b4c27a93b373d68de297d2f8abf1b4cf8a491698d6ec426e3"} Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.487000 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" event={"ID":"6b467302-1643-4309-a17c-7d54d3307272","Type":"ContainerStarted","Data":"7944bd5baae4baeba6475ee4fb94e7903f531a745973f982bc3de4941baa40ef"} Jan 21 17:34:12 crc kubenswrapper[4823]: W0121 17:34:12.743992 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dd8ea30_a041_4ce6_8a36_b8a355b076dc.slice/crio-84f86a4d0e95ecb61b3e5d4a613052d9019a0e5e52893cf676b0681ac6e79f48 WatchSource:0}: Error finding container 84f86a4d0e95ecb61b3e5d4a613052d9019a0e5e52893cf676b0681ac6e79f48: Status 404 returned error can't find the container with id 84f86a4d0e95ecb61b3e5d4a613052d9019a0e5e52893cf676b0681ac6e79f48 Jan 21 17:34:12 crc kubenswrapper[4823]: I0121 17:34:12.747028 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.095840 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.103154 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.112871 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-68qc5" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.113785 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.114028 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.114047 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.114745 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.119584 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.220469 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350b339b-c723-4ff3-ab95-83e82c6c4d52-operator-scripts\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.220538 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrg64\" (UniqueName: \"kubernetes.io/projected/350b339b-c723-4ff3-ab95-83e82c6c4d52-kube-api-access-qrg64\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.220588 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.220643 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/350b339b-c723-4ff3-ab95-83e82c6c4d52-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.220670 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/350b339b-c723-4ff3-ab95-83e82c6c4d52-kolla-config\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.220718 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350b339b-c723-4ff3-ab95-83e82c6c4d52-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.220746 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/350b339b-c723-4ff3-ab95-83e82c6c4d52-config-data-default\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.220787 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/350b339b-c723-4ff3-ab95-83e82c6c4d52-config-data-generated\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.322566 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrg64\" (UniqueName: \"kubernetes.io/projected/350b339b-c723-4ff3-ab95-83e82c6c4d52-kube-api-access-qrg64\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.322654 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.322708 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/350b339b-c723-4ff3-ab95-83e82c6c4d52-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.322725 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/350b339b-c723-4ff3-ab95-83e82c6c4d52-kolla-config\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.322769 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350b339b-c723-4ff3-ab95-83e82c6c4d52-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.322795 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/350b339b-c723-4ff3-ab95-83e82c6c4d52-config-data-default\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.322835 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/350b339b-c723-4ff3-ab95-83e82c6c4d52-config-data-generated\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.322874 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350b339b-c723-4ff3-ab95-83e82c6c4d52-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.324455 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350b339b-c723-4ff3-ab95-83e82c6c4d52-operator-scripts\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.325122 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.327912 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/350b339b-c723-4ff3-ab95-83e82c6c4d52-config-data-generated\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.328594 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/350b339b-c723-4ff3-ab95-83e82c6c4d52-kolla-config\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.328713 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/350b339b-c723-4ff3-ab95-83e82c6c4d52-config-data-default\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.342083 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.350876 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/350b339b-c723-4ff3-ab95-83e82c6c4d52-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.351367 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350b339b-c723-4ff3-ab95-83e82c6c4d52-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.392568 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrg64\" (UniqueName: \"kubernetes.io/projected/350b339b-c723-4ff3-ab95-83e82c6c4d52-kube-api-access-qrg64\") pod \"openstack-galera-0\" (UID: \"350b339b-c723-4ff3-ab95-83e82c6c4d52\") " pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.517268 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 17:34:13 crc kubenswrapper[4823]: I0121 17:34:13.534043 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dd8ea30-a041-4ce6-8a36-b8a355b076dc","Type":"ContainerStarted","Data":"84f86a4d0e95ecb61b3e5d4a613052d9019a0e5e52893cf676b0681ac6e79f48"} Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.416541 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.418072 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.429090 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.430651 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-b8x8t" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.430949 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.431205 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.463951 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.470208 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.471265 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.473603 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.473792 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-jnbw7" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.476947 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.518705 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.601579 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/111d0907-497e-401f-a017-76534940920e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.601669 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.601701 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbb101cd-8034-422f-9016-d0baa0d9513b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.601744 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111d0907-497e-401f-a017-76534940920e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.601772 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb101cd-8034-422f-9016-d0baa0d9513b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.601810 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqpdn\" (UniqueName: \"kubernetes.io/projected/dbb101cd-8034-422f-9016-d0baa0d9513b-kube-api-access-kqpdn\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.601834 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbb101cd-8034-422f-9016-d0baa0d9513b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.601907 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb101cd-8034-422f-9016-d0baa0d9513b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.601977 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/111d0907-497e-401f-a017-76534940920e-kolla-config\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.602016 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj22m\" (UniqueName: \"kubernetes.io/projected/111d0907-497e-401f-a017-76534940920e-kube-api-access-vj22m\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.602067 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb101cd-8034-422f-9016-d0baa0d9513b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.602098 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/111d0907-497e-401f-a017-76534940920e-config-data\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.602119 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbb101cd-8034-422f-9016-d0baa0d9513b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705576 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/111d0907-497e-401f-a017-76534940920e-kolla-config\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705638 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj22m\" (UniqueName: \"kubernetes.io/projected/111d0907-497e-401f-a017-76534940920e-kube-api-access-vj22m\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705690 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb101cd-8034-422f-9016-d0baa0d9513b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705715 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/dbb101cd-8034-422f-9016-d0baa0d9513b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705738 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/111d0907-497e-401f-a017-76534940920e-config-data\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705774 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/111d0907-497e-401f-a017-76534940920e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705811 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705836 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbb101cd-8034-422f-9016-d0baa0d9513b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705887 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111d0907-497e-401f-a017-76534940920e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705916 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb101cd-8034-422f-9016-d0baa0d9513b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705950 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqpdn\" (UniqueName: \"kubernetes.io/projected/dbb101cd-8034-422f-9016-d0baa0d9513b-kube-api-access-kqpdn\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.705975 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbb101cd-8034-422f-9016-d0baa0d9513b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.706009 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb101cd-8034-422f-9016-d0baa0d9513b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " 
pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.709868 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbb101cd-8034-422f-9016-d0baa0d9513b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.709907 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/111d0907-497e-401f-a017-76534940920e-config-data\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.710232 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.710472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/111d0907-497e-401f-a017-76534940920e-kolla-config\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.710707 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbb101cd-8034-422f-9016-d0baa0d9513b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.710893 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbb101cd-8034-422f-9016-d0baa0d9513b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.711416 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/111d0907-497e-401f-a017-76534940920e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.713813 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111d0907-497e-401f-a017-76534940920e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.715176 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb101cd-8034-422f-9016-d0baa0d9513b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.728755 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dbb101cd-8034-422f-9016-d0baa0d9513b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.730076 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb101cd-8034-422f-9016-d0baa0d9513b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.740121 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj22m\" (UniqueName: \"kubernetes.io/projected/111d0907-497e-401f-a017-76534940920e-kube-api-access-vj22m\") pod \"memcached-0\" (UID: \"111d0907-497e-401f-a017-76534940920e\") " pod="openstack/memcached-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.751031 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.754038 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqpdn\" (UniqueName: \"kubernetes.io/projected/dbb101cd-8034-422f-9016-d0baa0d9513b-kube-api-access-kqpdn\") pod \"openstack-cell1-galera-0\" (UID: \"dbb101cd-8034-422f-9016-d0baa0d9513b\") " pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:14 crc kubenswrapper[4823]: I0121 17:34:14.797647 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 17:34:15 crc kubenswrapper[4823]: I0121 17:34:15.046397 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 17:34:16 crc kubenswrapper[4823]: I0121 17:34:16.549438 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 17:34:16 crc kubenswrapper[4823]: I0121 17:34:16.552994 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 17:34:16 crc kubenswrapper[4823]: I0121 17:34:16.557986 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-88rs2" Jan 21 17:34:16 crc kubenswrapper[4823]: I0121 17:34:16.564924 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 17:34:16 crc kubenswrapper[4823]: I0121 17:34:16.678553 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-756mp\" (UniqueName: \"kubernetes.io/projected/d465f8f2-aad6-47d6-8887-ee38d8b846ac-kube-api-access-756mp\") pod \"kube-state-metrics-0\" (UID: \"d465f8f2-aad6-47d6-8887-ee38d8b846ac\") " pod="openstack/kube-state-metrics-0" Jan 21 17:34:16 crc kubenswrapper[4823]: I0121 17:34:16.808862 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-756mp\" (UniqueName: \"kubernetes.io/projected/d465f8f2-aad6-47d6-8887-ee38d8b846ac-kube-api-access-756mp\") pod \"kube-state-metrics-0\" (UID: \"d465f8f2-aad6-47d6-8887-ee38d8b846ac\") " pod="openstack/kube-state-metrics-0" Jan 21 17:34:16 crc kubenswrapper[4823]: I0121 17:34:16.841359 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-756mp\" (UniqueName: \"kubernetes.io/projected/d465f8f2-aad6-47d6-8887-ee38d8b846ac-kube-api-access-756mp\") pod \"kube-state-metrics-0\" (UID: \"d465f8f2-aad6-47d6-8887-ee38d8b846ac\") " pod="openstack/kube-state-metrics-0" Jan 21 17:34:16 crc kubenswrapper[4823]: I0121 17:34:16.885334 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.815020 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.821539 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.825973 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.827950 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.828613 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.828745 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hndk8" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.828988 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.829021 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.829260 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.829458 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.829647 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.926424 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.926487 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.926534 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.926565 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.926665 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.926758 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshlk\" (UniqueName: \"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-kube-api-access-qshlk\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.926963 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.927192 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.927265 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:17 crc kubenswrapper[4823]: I0121 17:34:17.927289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.028883 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.028957 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.029020 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshlk\" (UniqueName: \"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-kube-api-access-qshlk\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " 
pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.029070 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.029124 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.029179 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.029196 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.029225 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.029253 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.029303 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.146534 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.164591 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.180055 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.180570 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.212160 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.212262 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3bf7d8e6071accb44cb216af941703c855ece813d1c3a48f9936e31f1ede18e7/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.250790 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.301871 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.302770 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.315585 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.334637 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshlk\" (UniqueName: 
\"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-kube-api-access-qshlk\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.399741 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:18 crc kubenswrapper[4823]: I0121 17:34:18.482129 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.741267 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bfdrb"] Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.748928 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.754784 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bfdrb"] Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.762802 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.763117 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zfxft" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.763516 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.786761 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-ht698"] Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.788891 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.795593 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ht698"] Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938051 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-var-run\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938093 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80f6b9ff-4282-4118-8a46-acdae16c9a3d-scripts\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938113 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9588aa19-b204-450e-a781-2b3d119bd86e-combined-ca-bundle\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938139 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9588aa19-b204-450e-a781-2b3d119bd86e-var-run-ovn\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938176 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9588aa19-b204-450e-a781-2b3d119bd86e-var-run\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938198 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzcn6\" (UniqueName: \"kubernetes.io/projected/80f6b9ff-4282-4118-8a46-acdae16c9a3d-kube-api-access-gzcn6\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938233 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-var-lib\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938256 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5rxm\" (UniqueName: \"kubernetes.io/projected/9588aa19-b204-450e-a781-2b3d119bd86e-kube-api-access-x5rxm\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938317 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9588aa19-b204-450e-a781-2b3d119bd86e-ovn-controller-tls-certs\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938340 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9588aa19-b204-450e-a781-2b3d119bd86e-var-log-ovn\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938356 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-etc-ovs\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938377 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-var-log\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:19 crc kubenswrapper[4823]: I0121 17:34:19.938397 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9588aa19-b204-450e-a781-2b3d119bd86e-scripts\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040358 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9588aa19-b204-450e-a781-2b3d119bd86e-ovn-controller-tls-certs\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040423 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9588aa19-b204-450e-a781-2b3d119bd86e-var-log-ovn\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040452 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-etc-ovs\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040506 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-var-log\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040530 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9588aa19-b204-450e-a781-2b3d119bd86e-scripts\") pod \"ovn-controller-bfdrb\" (UID: 
\"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040563 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-var-run\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040595 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80f6b9ff-4282-4118-8a46-acdae16c9a3d-scripts\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040609 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9588aa19-b204-450e-a781-2b3d119bd86e-combined-ca-bundle\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040628 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9588aa19-b204-450e-a781-2b3d119bd86e-var-run-ovn\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040671 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9588aa19-b204-450e-a781-2b3d119bd86e-var-run\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040689 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzcn6\" (UniqueName: \"kubernetes.io/projected/80f6b9ff-4282-4118-8a46-acdae16c9a3d-kube-api-access-gzcn6\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040736 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-var-lib\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.040758 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5rxm\" (UniqueName: \"kubernetes.io/projected/9588aa19-b204-450e-a781-2b3d119bd86e-kube-api-access-x5rxm\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.041117 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9588aa19-b204-450e-a781-2b3d119bd86e-var-log-ovn\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.041351 4823 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9588aa19-b204-450e-a781-2b3d119bd86e-var-run\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.041509 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-var-lib\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.041534 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9588aa19-b204-450e-a781-2b3d119bd86e-var-run-ovn\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.041691 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-etc-ovs\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.041683 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-var-log\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.041741 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80f6b9ff-4282-4118-8a46-acdae16c9a3d-var-run\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.043681 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9588aa19-b204-450e-a781-2b3d119bd86e-scripts\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.051018 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9588aa19-b204-450e-a781-2b3d119bd86e-combined-ca-bundle\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.059398 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzcn6\" (UniqueName: \"kubernetes.io/projected/80f6b9ff-4282-4118-8a46-acdae16c9a3d-kube-api-access-gzcn6\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.061380 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5rxm\" (UniqueName: \"kubernetes.io/projected/9588aa19-b204-450e-a781-2b3d119bd86e-kube-api-access-x5rxm\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 
17:34:20.073821 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9588aa19-b204-450e-a781-2b3d119bd86e-ovn-controller-tls-certs\") pod \"ovn-controller-bfdrb\" (UID: \"9588aa19-b204-450e-a781-2b3d119bd86e\") " pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.075368 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.075653 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80f6b9ff-4282-4118-8a46-acdae16c9a3d-scripts\") pod \"ovn-controller-ovs-ht698\" (UID: \"80f6b9ff-4282-4118-8a46-acdae16c9a3d\") " pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:20 crc kubenswrapper[4823]: I0121 17:34:20.114627 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.126313 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.128603 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.130159 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.134180 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.135264 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.135467 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mw5nb" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.135361 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.153604 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.288841 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w278d\" (UniqueName: \"kubernetes.io/projected/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-kube-api-access-w278d\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.289121 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.289272 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " 
pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.289376 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.289402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.289426 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.289537 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.289609 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-config\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.391413 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w278d\" (UniqueName: \"kubernetes.io/projected/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-kube-api-access-w278d\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.391492 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.391530 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.391566 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.391588 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.391608 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.391651 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.391674 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-config\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.392522 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.392700 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-config\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.392782 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.393366 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.398113 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.399541 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.415399 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-w278d\" (UniqueName: \"kubernetes.io/projected/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-kube-api-access-w278d\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.416407 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.419664 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.463572 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.954588 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.957188 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.959638 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-q4hgj" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.960338 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.960930 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.963210 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 17:34:23 crc kubenswrapper[4823]: I0121 17:34:23.965560 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.106645 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.106739 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-config\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.106793 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.108125 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfh77\" (UniqueName: \"kubernetes.io/projected/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-kube-api-access-dfh77\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.108186 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.108218 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.108368 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.108441 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.209890 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.209962 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.209990 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-config\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.210015 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.210041 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfh77\" (UniqueName: \"kubernetes.io/projected/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-kube-api-access-dfh77\") pod \"ovsdbserver-nb-0\" 
(UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.210066 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.210089 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.210150 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.211504 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.211543 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.211587 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.213074 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-config\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.215651 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.216573 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.219065 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.231024 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfh77\" (UniqueName: \"kubernetes.io/projected/c2e95955-387e-4ec0-a5b4-25a41b7cf9c9-kube-api-access-dfh77\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.239602 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:24 crc kubenswrapper[4823]: I0121 17:34:24.287382 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.500582 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.501523 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkgf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-sv88h_openstack(ce644b68-c368-42dd-b834-d86cee38759d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.502767 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" podUID="ce644b68-c368-42dd-b834-d86cee38759d" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.608739 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.609057 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hg9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xvw84_openstack(f7dc33bd-09be-4aca-98c5-3d7b69dd0249): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.610373 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-xvw84" podUID="f7dc33bd-09be-4aca-98c5-3d7b69dd0249" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.611914 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.612100 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtdmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-f2mtl_openstack(cf023b91-8240-4311-a8cb-185f4f0f75db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.613267 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" podUID="cf023b91-8240-4311-a8cb-185f4f0f75db" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.680987 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.681287 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn9cz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-zs99j_openstack(6b467302-1643-4309-a17c-7d54d3307272): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.682636 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" podUID="6b467302-1643-4309-a17c-7d54d3307272" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.784411 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-xvw84" podUID="f7dc33bd-09be-4aca-98c5-3d7b69dd0249" Jan 21 17:34:30 crc kubenswrapper[4823]: E0121 17:34:30.786066 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" podUID="6b467302-1643-4309-a17c-7d54d3307272" Jan 21 17:34:31 crc kubenswrapper[4823]: I0121 17:34:31.807334 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" event={"ID":"ce644b68-c368-42dd-b834-d86cee38759d","Type":"ContainerDied","Data":"27567ff42119ce363f11762f028daf9911ef0ca64f34cdb6128b6acc82e138c5"} Jan 21 17:34:31 crc kubenswrapper[4823]: I0121 17:34:31.807593 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27567ff42119ce363f11762f028daf9911ef0ca64f34cdb6128b6acc82e138c5" 
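The four "Unhandled Error" blocks above show the same failure on every dnsmasq-dns replica: the init container's pull of quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified is aborted ("rpc error: code = Canceled desc = copying config: context canceled"), kubelet records ErrImagePull, and two of the pods then fall into ImagePullBackOff before the ContainerDied / "Container not found" pair below tears their sandboxes down. A minimal sketch for tallying such failures from a capture like this one, assuming journalctl's one-entry-per-line output and the exact message format shown here:

    import re
    import sys
    from collections import Counter

    # Matches kubelet pod_workers.go "Error syncing pod, skipping" entries as
    # printed in this excerpt, e.g.:
    #   ... "Error syncing pod, skipping" err="failed to \"StartContainer\"
    #   for \"init\" with ErrImagePull: ..." pod="openstack/dnsmasq-dns-..."
    SYNC_ERR = re.compile(
        r'"Error syncing pod, skipping" err=".*?'
        r'with (?P<reason>ErrImagePull|ImagePullBackOff).*?'
        r'pod="(?P<pod>[^"]+)"'
    )

    def count_pull_failures(journal_text: str) -> Counter:
        """Tally (pod, failure reason) pairs seen in raw journal text."""
        hits = Counter()
        for m in SYNC_ERR.finditer(journal_text):
            hits[(m.group("pod"), m.group("reason"))] += 1
        return hits

    if __name__ == "__main__":
        for (pod, reason), n in sorted(count_pull_failures(sys.stdin.read()).items()):
            print(f"{n:4d}  {reason:18s}  {pod}")

Run against this window it would report one ErrImagePull for each of the four dnsmasq-dns pods, plus ImagePullBackOff for -666b6646f7-xvw84 and -57d769cc4f-zs99j.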
Jan 21 17:34:31 crc kubenswrapper[4823]: I0121 17:34:31.810324 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" event={"ID":"cf023b91-8240-4311-a8cb-185f4f0f75db","Type":"ContainerDied","Data":"2e79dec38d8e97100e0c0eb2b1e1a6fbad9e2317846a5ca59dddb38bd6bbbcab"} Jan 21 17:34:31 crc kubenswrapper[4823]: I0121 17:34:31.810349 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e79dec38d8e97100e0c0eb2b1e1a6fbad9e2317846a5ca59dddb38bd6bbbcab" Jan 21 17:34:31 crc kubenswrapper[4823]: I0121 17:34:31.959845 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:31 crc kubenswrapper[4823]: I0121 17:34:31.994120 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.128647 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf023b91-8240-4311-a8cb-185f4f0f75db-config\") pod \"cf023b91-8240-4311-a8cb-185f4f0f75db\" (UID: \"cf023b91-8240-4311-a8cb-185f4f0f75db\") " Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.129053 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkgf8\" (UniqueName: \"kubernetes.io/projected/ce644b68-c368-42dd-b834-d86cee38759d-kube-api-access-rkgf8\") pod \"ce644b68-c368-42dd-b834-d86cee38759d\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.129100 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-dns-svc\") pod \"ce644b68-c368-42dd-b834-d86cee38759d\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.129163 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-config\") pod \"ce644b68-c368-42dd-b834-d86cee38759d\" (UID: \"ce644b68-c368-42dd-b834-d86cee38759d\") " Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.129187 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf023b91-8240-4311-a8cb-185f4f0f75db-config" (OuterVolumeSpecName: "config") pod "cf023b91-8240-4311-a8cb-185f4f0f75db" (UID: "cf023b91-8240-4311-a8cb-185f4f0f75db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.129223 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtdmd\" (UniqueName: \"kubernetes.io/projected/cf023b91-8240-4311-a8cb-185f4f0f75db-kube-api-access-qtdmd\") pod \"cf023b91-8240-4311-a8cb-185f4f0f75db\" (UID: \"cf023b91-8240-4311-a8cb-185f4f0f75db\") " Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.132498 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce644b68-c368-42dd-b834-d86cee38759d" (UID: "ce644b68-c368-42dd-b834-d86cee38759d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.133493 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-config" (OuterVolumeSpecName: "config") pod "ce644b68-c368-42dd-b834-d86cee38759d" (UID: "ce644b68-c368-42dd-b834-d86cee38759d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.134388 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.134420 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce644b68-c368-42dd-b834-d86cee38759d-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.134432 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf023b91-8240-4311-a8cb-185f4f0f75db-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.140460 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf023b91-8240-4311-a8cb-185f4f0f75db-kube-api-access-qtdmd" (OuterVolumeSpecName: "kube-api-access-qtdmd") pod "cf023b91-8240-4311-a8cb-185f4f0f75db" (UID: "cf023b91-8240-4311-a8cb-185f4f0f75db"). InnerVolumeSpecName "kube-api-access-qtdmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.163200 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce644b68-c368-42dd-b834-d86cee38759d-kube-api-access-rkgf8" (OuterVolumeSpecName: "kube-api-access-rkgf8") pod "ce644b68-c368-42dd-b834-d86cee38759d" (UID: "ce644b68-c368-42dd-b834-d86cee38759d"). InnerVolumeSpecName "kube-api-access-rkgf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.236631 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtdmd\" (UniqueName: \"kubernetes.io/projected/cf023b91-8240-4311-a8cb-185f4f0f75db-kube-api-access-qtdmd\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.236685 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkgf8\" (UniqueName: \"kubernetes.io/projected/ce644b68-c368-42dd-b834-d86cee38759d-kube-api-access-rkgf8\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.275625 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.370898 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ht698"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.474485 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.481901 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.547137 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 17:34:32 crc kubenswrapper[4823]: W0121 17:34:32.556298 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96c4434d_b98a_4bc8_8d26_bd9a2a6d98bf.slice/crio-b589807d679bb266dbf6c18da22fa9787c2003c639a365900127243ed6074b11 WatchSource:0}: Error finding container b589807d679bb266dbf6c18da22fa9787c2003c639a365900127243ed6074b11: Status 404 returned error can't find the container with id b589807d679bb266dbf6c18da22fa9787c2003c639a365900127243ed6074b11 Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.638698 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vcbxp"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.639838 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.645050 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.661166 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86467b0e-fd16-47e1-93d4-98ff2032c226-config\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.661528 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86467b0e-fd16-47e1-93d4-98ff2032c226-combined-ca-bundle\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.661610 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/86467b0e-fd16-47e1-93d4-98ff2032c226-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.661668 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/86467b0e-fd16-47e1-93d4-98ff2032c226-ovs-rundir\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.661778 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9bnr\" (UniqueName: \"kubernetes.io/projected/86467b0e-fd16-47e1-93d4-98ff2032c226-kube-api-access-q9bnr\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.661812 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/86467b0e-fd16-47e1-93d4-98ff2032c226-ovn-rundir\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.670925 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.683909 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vcbxp"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.766999 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/86467b0e-fd16-47e1-93d4-98ff2032c226-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.767079 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/86467b0e-fd16-47e1-93d4-98ff2032c226-ovs-rundir\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.767145 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9bnr\" (UniqueName: \"kubernetes.io/projected/86467b0e-fd16-47e1-93d4-98ff2032c226-kube-api-access-q9bnr\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.767173 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/86467b0e-fd16-47e1-93d4-98ff2032c226-ovn-rundir\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.767256 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86467b0e-fd16-47e1-93d4-98ff2032c226-config\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.767296 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86467b0e-fd16-47e1-93d4-98ff2032c226-combined-ca-bundle\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.767531 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/86467b0e-fd16-47e1-93d4-98ff2032c226-ovn-rundir\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.767535 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/86467b0e-fd16-47e1-93d4-98ff2032c226-ovs-rundir\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.768482 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86467b0e-fd16-47e1-93d4-98ff2032c226-config\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.776844 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86467b0e-fd16-47e1-93d4-98ff2032c226-combined-ca-bundle\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.777713 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/86467b0e-fd16-47e1-93d4-98ff2032c226-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.806331 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bfdrb"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.833497 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d465f8f2-aad6-47d6-8887-ee38d8b846ac","Type":"ContainerStarted","Data":"778e0f12536a7e7298554e71ef1289337da3bcba4dacbccd07d9f30e617687c5"} Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.841272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"350b339b-c723-4ff3-ab95-83e82c6c4d52","Type":"ContainerStarted","Data":"9b8a13c86fbc0314d322db18f166b9a89f4a2bf075663d298c78bf5461f6b6b2"} Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.845660 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9bnr\" (UniqueName: \"kubernetes.io/projected/86467b0e-fd16-47e1-93d4-98ff2032c226-kube-api-access-q9bnr\") pod \"ovn-controller-metrics-vcbxp\" (UID: \"86467b0e-fd16-47e1-93d4-98ff2032c226\") " pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.861762 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerStarted","Data":"36f62ad5e0ae2c734160fd87c21988ff446e8c242a03e27ee000ed2bf19521af"} Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.865316 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf","Type":"ContainerStarted","Data":"b589807d679bb266dbf6c18da22fa9787c2003c639a365900127243ed6074b11"} Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.868309 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zs99j"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.874477 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bfdrb" event={"ID":"9588aa19-b204-450e-a781-2b3d119bd86e","Type":"ContainerStarted","Data":"97efccbf0017d93d338e27fb85e2c50afab074579d4b5e6997159ec704d7a6d1"} Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.877188 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"dbb101cd-8034-422f-9016-d0baa0d9513b","Type":"ContainerStarted","Data":"081e51ee26bf0d832605b93c2e668ec0e3e313b1a9207491470503c2f78a83dd"} Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.886898 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sv88h" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.886977 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f2mtl" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.886906 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht698" event={"ID":"80f6b9ff-4282-4118-8a46-acdae16c9a3d","Type":"ContainerStarted","Data":"c9422200ff16f809a9c978abf9b3be3d4695e590f690dc7aeb055922ce499cd7"} Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.899260 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.932248 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d6xkq"] Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.934203 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.939910 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.957006 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d6xkq"] Jan 21 17:34:32 crc kubenswrapper[4823]: W0121 17:34:32.957569 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod111d0907_497e_401f_a017_76534940920e.slice/crio-766a84aaf8817d6b2dbd882c2f4b0d20611b2b3d17a089016a28575c73250c4b WatchSource:0}: Error finding container 766a84aaf8817d6b2dbd882c2f4b0d20611b2b3d17a089016a28575c73250c4b: Status 404 returned error can't find the container with id 766a84aaf8817d6b2dbd882c2f4b0d20611b2b3d17a089016a28575c73250c4b Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.974251 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vcbxp" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.985568 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmjp2\" (UniqueName: \"kubernetes.io/projected/62d8c4d5-179a-4a89-85cb-26f84465bf2b-kube-api-access-bmjp2\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.985665 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-config\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.985748 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:32 crc kubenswrapper[4823]: I0121 17:34:32.985832 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.026094 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2mtl"] Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.037843 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2mtl"] Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.076153 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.088362 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmjp2\" (UniqueName: \"kubernetes.io/projected/62d8c4d5-179a-4a89-85cb-26f84465bf2b-kube-api-access-bmjp2\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.088413 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-config\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.088451 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.088482 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.089495 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.090374 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.090562 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-config\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.106040 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sv88h"] Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.106105 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sv88h"] Jan 21 17:34:33 crc kubenswrapper[4823]: W0121 17:34:33.114066 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2e95955_387e_4ec0_a5b4_25a41b7cf9c9.slice/crio-5c2d5e8b3abaa139dfcedbad9c4de3abc3306123e8b789be790971122564cbb5 WatchSource:0}: Error finding container 5c2d5e8b3abaa139dfcedbad9c4de3abc3306123e8b789be790971122564cbb5: Status 404 returned error can't find the container with id 5c2d5e8b3abaa139dfcedbad9c4de3abc3306123e8b789be790971122564cbb5 Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.124491 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmjp2\" (UniqueName: \"kubernetes.io/projected/62d8c4d5-179a-4a89-85cb-26f84465bf2b-kube-api-access-bmjp2\") pod \"dnsmasq-dns-6bc7876d45-d6xkq\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.276835 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvw84"] Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.313954 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-r4c8w"] Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.317264 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.319443 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.321331 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.422485 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-dns-svc\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.422726 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-config\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.422800 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.423006 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhx5\" (UniqueName: \"kubernetes.io/projected/78df6398-7de0-4017-ac89-70c5fd48130d-kube-api-access-6mhx5\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.423030 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.434744 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce644b68-c368-42dd-b834-d86cee38759d" path="/var/lib/kubelet/pods/ce644b68-c368-42dd-b834-d86cee38759d/volumes" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.436205 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf023b91-8240-4311-a8cb-185f4f0f75db" path="/var/lib/kubelet/pods/cf023b91-8240-4311-a8cb-185f4f0f75db/volumes" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.437047 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-r4c8w"] Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.536504 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.536899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mhx5\" (UniqueName: 
\"kubernetes.io/projected/78df6398-7de0-4017-ac89-70c5fd48130d-kube-api-access-6mhx5\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.536931 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.537514 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.537808 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-dns-svc\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.538287 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-config\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.538500 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.544001 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-config\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.544641 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-dns-svc\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.568595 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mhx5\" (UniqueName: \"kubernetes.io/projected/78df6398-7de0-4017-ac89-70c5fd48130d-kube-api-access-6mhx5\") pod \"dnsmasq-dns-8554648995-r4c8w\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.588546 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.669078 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.761962 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-config\") pod \"6b467302-1643-4309-a17c-7d54d3307272\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.762090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-dns-svc\") pod \"6b467302-1643-4309-a17c-7d54d3307272\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.762184 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn9cz\" (UniqueName: \"kubernetes.io/projected/6b467302-1643-4309-a17c-7d54d3307272-kube-api-access-zn9cz\") pod \"6b467302-1643-4309-a17c-7d54d3307272\" (UID: \"6b467302-1643-4309-a17c-7d54d3307272\") " Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.762555 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-config" (OuterVolumeSpecName: "config") pod "6b467302-1643-4309-a17c-7d54d3307272" (UID: "6b467302-1643-4309-a17c-7d54d3307272"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.763244 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b467302-1643-4309-a17c-7d54d3307272" (UID: "6b467302-1643-4309-a17c-7d54d3307272"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.786886 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b467302-1643-4309-a17c-7d54d3307272-kube-api-access-zn9cz" (OuterVolumeSpecName: "kube-api-access-zn9cz") pod "6b467302-1643-4309-a17c-7d54d3307272" (UID: "6b467302-1643-4309-a17c-7d54d3307272"). InnerVolumeSpecName "kube-api-access-zn9cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.794701 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vcbxp"] Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.864768 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.864814 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn9cz\" (UniqueName: \"kubernetes.io/projected/6b467302-1643-4309-a17c-7d54d3307272-kube-api-access-zn9cz\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.864879 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b467302-1643-4309-a17c-7d54d3307272-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.913932 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.913947 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zs99j" event={"ID":"6b467302-1643-4309-a17c-7d54d3307272","Type":"ContainerDied","Data":"7944bd5baae4baeba6475ee4fb94e7903f531a745973f982bc3de4941baa40ef"} Jan 21 17:34:33 crc kubenswrapper[4823]: W0121 17:34:33.917500 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86467b0e_fd16_47e1_93d4_98ff2032c226.slice/crio-25597231a3617a680b00ccc9e1b7f8b6dafd535039e967e3effd544f3f8a3bf4 WatchSource:0}: Error finding container 25597231a3617a680b00ccc9e1b7f8b6dafd535039e967e3effd544f3f8a3bf4: Status 404 returned error can't find the container with id 25597231a3617a680b00ccc9e1b7f8b6dafd535039e967e3effd544f3f8a3bf4 Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.923283 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"619d3aad-c1a1-4d30-ac6f-a0b9535371dc","Type":"ContainerStarted","Data":"bf8050866cc839063b75d98e01383637e0e7b6575b76d4460b5d2bb06be62370"} Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.928188 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9","Type":"ContainerStarted","Data":"5c2d5e8b3abaa139dfcedbad9c4de3abc3306123e8b789be790971122564cbb5"} Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.932111 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dd8ea30-a041-4ce6-8a36-b8a355b076dc","Type":"ContainerStarted","Data":"72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07"} Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.935941 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"111d0907-497e-401f-a017-76534940920e","Type":"ContainerStarted","Data":"766a84aaf8817d6b2dbd882c2f4b0d20611b2b3d17a089016a28575c73250c4b"} Jan 21 17:34:33 crc kubenswrapper[4823]: I0121 17:34:33.991923 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.038667 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d6xkq"] Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.069301 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-config\") pod \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.069376 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hg9f\" (UniqueName: \"kubernetes.io/projected/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-kube-api-access-8hg9f\") pod \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.069674 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-dns-svc\") pod \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\" (UID: \"f7dc33bd-09be-4aca-98c5-3d7b69dd0249\") " Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.070076 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-config" (OuterVolumeSpecName: "config") pod "f7dc33bd-09be-4aca-98c5-3d7b69dd0249" (UID: "f7dc33bd-09be-4aca-98c5-3d7b69dd0249"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.070138 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7dc33bd-09be-4aca-98c5-3d7b69dd0249" (UID: "f7dc33bd-09be-4aca-98c5-3d7b69dd0249"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.071234 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.071417 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.087243 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-kube-api-access-8hg9f" (OuterVolumeSpecName: "kube-api-access-8hg9f") pod "f7dc33bd-09be-4aca-98c5-3d7b69dd0249" (UID: "f7dc33bd-09be-4aca-98c5-3d7b69dd0249"). InnerVolumeSpecName "kube-api-access-8hg9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.104468 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zs99j"] Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.123104 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zs99j"] Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.173393 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hg9f\" (UniqueName: \"kubernetes.io/projected/f7dc33bd-09be-4aca-98c5-3d7b69dd0249-kube-api-access-8hg9f\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.730839 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-r4c8w"] Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.947953 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xvw84" event={"ID":"f7dc33bd-09be-4aca-98c5-3d7b69dd0249","Type":"ContainerDied","Data":"7321d11bfcb1880b4c27a93b373d68de297d2f8abf1b4cf8a491698d6ec426e3"} Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.947985 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xvw84" Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.956464 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" event={"ID":"62d8c4d5-179a-4a89-85cb-26f84465bf2b","Type":"ContainerStarted","Data":"3b96fdfceda77a869018cfecae67a712cb13820b0a2606fa787050fc18676c58"} Jan 21 17:34:34 crc kubenswrapper[4823]: I0121 17:34:34.957963 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vcbxp" event={"ID":"86467b0e-fd16-47e1-93d4-98ff2032c226","Type":"ContainerStarted","Data":"25597231a3617a680b00ccc9e1b7f8b6dafd535039e967e3effd544f3f8a3bf4"} Jan 21 17:34:35 crc kubenswrapper[4823]: I0121 17:34:35.019648 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvw84"] Jan 21 17:34:35 crc kubenswrapper[4823]: I0121 17:34:35.025219 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvw84"] Jan 21 17:34:35 crc kubenswrapper[4823]: I0121 17:34:35.358478 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b467302-1643-4309-a17c-7d54d3307272" path="/var/lib/kubelet/pods/6b467302-1643-4309-a17c-7d54d3307272/volumes" Jan 21 17:34:35 crc kubenswrapper[4823]: I0121 17:34:35.359357 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dc33bd-09be-4aca-98c5-3d7b69dd0249" path="/var/lib/kubelet/pods/f7dc33bd-09be-4aca-98c5-3d7b69dd0249/volumes" Jan 21 17:34:35 crc kubenswrapper[4823]: I0121 17:34:35.966468 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-r4c8w" event={"ID":"78df6398-7de0-4017-ac89-70c5fd48130d","Type":"ContainerStarted","Data":"2f39ed4e6c0e2a497a6f5250bdbb864a817c56782c0789cbab211914608ca8fe"} Jan 21 17:34:52 crc kubenswrapper[4823]: E0121 17:34:52.358720 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 17:34:52 crc kubenswrapper[4823]: E0121 17:34:52.359316 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled 
desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 17:34:52 crc kubenswrapper[4823]: E0121 17:34:52.359508 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-756mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(d465f8f2-aad6-47d6-8887-ee38d8b846ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:34:52 crc kubenswrapper[4823]: E0121 17:34:52.360708 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="d465f8f2-aad6-47d6-8887-ee38d8b846ac" Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.192071 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"dbb101cd-8034-422f-9016-d0baa0d9513b","Type":"ContainerStarted","Data":"7e1ad064a645401574fb9320f1b4d2719d879cc740f84e7b68fe59ae6e716211"} Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.195192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht698" event={"ID":"80f6b9ff-4282-4118-8a46-acdae16c9a3d","Type":"ContainerStarted","Data":"9e5c5ba6549f05244f7b00c15b7dbb50c3609e162155c4a56217c408a23e0a19"} Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.214526 4823 
generic.go:334] "Generic (PLEG): container finished" podID="78df6398-7de0-4017-ac89-70c5fd48130d" containerID="d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447" exitCode=0 Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.214572 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-r4c8w" event={"ID":"78df6398-7de0-4017-ac89-70c5fd48130d","Type":"ContainerDied","Data":"d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447"} Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.228052 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9","Type":"ContainerStarted","Data":"0337d70c8fb0272caca74036d80ae94c1765c2f53bd5466b154aee143a4a586c"} Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.243912 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"111d0907-497e-401f-a017-76534940920e","Type":"ContainerStarted","Data":"bd8f6e9ed6f3a9305a06f3708d72683c39cf49a3413a6024c9b39560028c3d5d"} Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.244211 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.270471 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"350b339b-c723-4ff3-ab95-83e82c6c4d52","Type":"ContainerStarted","Data":"7e9ecfd7445a29a196ce1a942740e70786faa28481b8cc3258fa4d598f791bd6"} Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.299847 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf","Type":"ContainerStarted","Data":"22333c61b82050d780c73fb452f18c94193d28023b041704ba6c25dd7dc504fc"} Jan 21 17:34:53 crc kubenswrapper[4823]: E0121 17:34:53.302081 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="d465f8f2-aad6-47d6-8887-ee38d8b846ac" Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.321941 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.700044993 podStartE2EDuration="39.321918052s" podCreationTimestamp="2026-01-21 17:34:14 +0000 UTC" firstStartedPulling="2026-01-21 17:34:32.997339717 +0000 UTC m=+1073.923470577" lastFinishedPulling="2026-01-21 17:34:43.619212776 +0000 UTC m=+1084.545343636" observedRunningTime="2026-01-21 17:34:53.297994551 +0000 UTC m=+1094.224125411" watchObservedRunningTime="2026-01-21 17:34:53.321918052 +0000 UTC m=+1094.248048912" Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.350151 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vcbxp" podStartSLOduration=2.943484977 podStartE2EDuration="21.35012625s" podCreationTimestamp="2026-01-21 17:34:32 +0000 UTC" firstStartedPulling="2026-01-21 17:34:33.957486158 +0000 UTC m=+1074.883617018" lastFinishedPulling="2026-01-21 17:34:52.364127431 +0000 UTC m=+1093.290258291" observedRunningTime="2026-01-21 17:34:53.346043879 +0000 UTC m=+1094.272174739" watchObservedRunningTime="2026-01-21 17:34:53.35012625 +0000 UTC m=+1094.276257120" Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.421228 
4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.384928451 podStartE2EDuration="31.421210827s" podCreationTimestamp="2026-01-21 17:34:22 +0000 UTC" firstStartedPulling="2026-01-21 17:34:32.574052086 +0000 UTC m=+1073.500182946" lastFinishedPulling="2026-01-21 17:34:44.610334442 +0000 UTC m=+1085.536465322" observedRunningTime="2026-01-21 17:34:53.409942478 +0000 UTC m=+1094.336073338" watchObservedRunningTime="2026-01-21 17:34:53.421210827 +0000 UTC m=+1094.347341687" Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.464388 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:53 crc kubenswrapper[4823]: I0121 17:34:53.464442 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.620924 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vcbxp" event={"ID":"86467b0e-fd16-47e1-93d4-98ff2032c226","Type":"ContainerStarted","Data":"9247953909a4f972a4a9cc581ad03e3a358aafe63a7e2648d24f19f8a1e8a859"} Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.628444 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf","Type":"ContainerStarted","Data":"5385ff82ea61948e1b03efe93fb4492a763b2892936447684e7da4c0ee499bfb"} Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.630693 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bfdrb" event={"ID":"9588aa19-b204-450e-a781-2b3d119bd86e","Type":"ContainerStarted","Data":"361e51cda9bca062ccb7c35c40d2157817f1e3d70e67d613995d66c9d4453af3"} Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.631121 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-bfdrb" Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.636240 4823 generic.go:334] "Generic (PLEG): container finished" podID="80f6b9ff-4282-4118-8a46-acdae16c9a3d" containerID="9e5c5ba6549f05244f7b00c15b7dbb50c3609e162155c4a56217c408a23e0a19" exitCode=0 Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.636369 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht698" event={"ID":"80f6b9ff-4282-4118-8a46-acdae16c9a3d","Type":"ContainerDied","Data":"9e5c5ba6549f05244f7b00c15b7dbb50c3609e162155c4a56217c408a23e0a19"} Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.643038 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-r4c8w" event={"ID":"78df6398-7de0-4017-ac89-70c5fd48130d","Type":"ContainerStarted","Data":"69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e"} Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.643279 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.655458 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c2e95955-387e-4ec0-a5b4-25a41b7cf9c9","Type":"ContainerStarted","Data":"94d6465d403228a2d3efe97eca0e4cdbf19c469bc4e2a2c193abcc7ba38766ff"} Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.661014 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bfdrb" podStartSLOduration=16.482754274 
podStartE2EDuration="35.660987868s" podCreationTimestamp="2026-01-21 17:34:19 +0000 UTC" firstStartedPulling="2026-01-21 17:34:32.81779594 +0000 UTC m=+1073.743926810" lastFinishedPulling="2026-01-21 17:34:51.996029544 +0000 UTC m=+1092.922160404" observedRunningTime="2026-01-21 17:34:54.657653416 +0000 UTC m=+1095.583784276" watchObservedRunningTime="2026-01-21 17:34:54.660987868 +0000 UTC m=+1095.587118728" Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.669470 4823 generic.go:334] "Generic (PLEG): container finished" podID="62d8c4d5-179a-4a89-85cb-26f84465bf2b" containerID="716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47" exitCode=0 Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.670485 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" event={"ID":"62d8c4d5-179a-4a89-85cb-26f84465bf2b","Type":"ContainerDied","Data":"716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47"} Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.714887 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-r4c8w" podStartSLOduration=5.417164608 podStartE2EDuration="21.714842659s" podCreationTimestamp="2026-01-21 17:34:33 +0000 UTC" firstStartedPulling="2026-01-21 17:34:35.249660563 +0000 UTC m=+1076.175791423" lastFinishedPulling="2026-01-21 17:34:51.547338604 +0000 UTC m=+1092.473469474" observedRunningTime="2026-01-21 17:34:54.704396391 +0000 UTC m=+1095.630527251" watchObservedRunningTime="2026-01-21 17:34:54.714842659 +0000 UTC m=+1095.640973519" Jan 21 17:34:54 crc kubenswrapper[4823]: I0121 17:34:54.734557 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.638431063 podStartE2EDuration="32.734535046s" podCreationTimestamp="2026-01-21 17:34:22 +0000 UTC" firstStartedPulling="2026-01-21 17:34:33.118973374 +0000 UTC m=+1074.045104244" lastFinishedPulling="2026-01-21 17:34:49.215077357 +0000 UTC m=+1090.141208227" observedRunningTime="2026-01-21 17:34:54.732929796 +0000 UTC m=+1095.659060666" watchObservedRunningTime="2026-01-21 17:34:54.734535046 +0000 UTC m=+1095.660665906" Jan 21 17:34:55 crc kubenswrapper[4823]: I0121 17:34:55.678779 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht698" event={"ID":"80f6b9ff-4282-4118-8a46-acdae16c9a3d","Type":"ContainerStarted","Data":"08b573b8e163419c03d9b9417ed8ea0ff3cbac649770bb79ddeb04c007c72dab"} Jan 21 17:34:55 crc kubenswrapper[4823]: I0121 17:34:55.679431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht698" event={"ID":"80f6b9ff-4282-4118-8a46-acdae16c9a3d","Type":"ContainerStarted","Data":"424e28d1ff450ec6b8f41f5cccfe3780794a67edf110b3c6810ce6d2b4db774d"} Jan 21 17:34:55 crc kubenswrapper[4823]: I0121 17:34:55.682399 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" event={"ID":"62d8c4d5-179a-4a89-85cb-26f84465bf2b","Type":"ContainerStarted","Data":"2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580"} Jan 21 17:34:55 crc kubenswrapper[4823]: I0121 17:34:55.683804 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:55 crc kubenswrapper[4823]: I0121 17:34:55.846598 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" podStartSLOduration=9.201061109 
podStartE2EDuration="23.84656367s" podCreationTimestamp="2026-01-21 17:34:32 +0000 UTC" firstStartedPulling="2026-01-21 17:34:34.56853222 +0000 UTC m=+1075.494663080" lastFinishedPulling="2026-01-21 17:34:49.214034751 +0000 UTC m=+1090.140165641" observedRunningTime="2026-01-21 17:34:55.835872416 +0000 UTC m=+1096.762003296" watchObservedRunningTime="2026-01-21 17:34:55.84656367 +0000 UTC m=+1096.772694530" Jan 21 17:34:56 crc kubenswrapper[4823]: I0121 17:34:56.505955 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:56 crc kubenswrapper[4823]: I0121 17:34:56.714512 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ht698" podStartSLOduration=25.469830705 podStartE2EDuration="37.714482721s" podCreationTimestamp="2026-01-21 17:34:19 +0000 UTC" firstStartedPulling="2026-01-21 17:34:32.365657105 +0000 UTC m=+1073.291787965" lastFinishedPulling="2026-01-21 17:34:44.610309101 +0000 UTC m=+1085.536439981" observedRunningTime="2026-01-21 17:34:56.708173175 +0000 UTC m=+1097.634304035" watchObservedRunningTime="2026-01-21 17:34:56.714482721 +0000 UTC m=+1097.640613591" Jan 21 17:34:57 crc kubenswrapper[4823]: I0121 17:34:57.288219 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:57 crc kubenswrapper[4823]: I0121 17:34:57.331890 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:57 crc kubenswrapper[4823]: I0121 17:34:57.705386 4823 generic.go:334] "Generic (PLEG): container finished" podID="350b339b-c723-4ff3-ab95-83e82c6c4d52" containerID="7e9ecfd7445a29a196ce1a942740e70786faa28481b8cc3258fa4d598f791bd6" exitCode=0 Jan 21 17:34:57 crc kubenswrapper[4823]: I0121 17:34:57.705500 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"350b339b-c723-4ff3-ab95-83e82c6c4d52","Type":"ContainerDied","Data":"7e9ecfd7445a29a196ce1a942740e70786faa28481b8cc3258fa4d598f791bd6"} Jan 21 17:34:57 crc kubenswrapper[4823]: I0121 17:34:57.705966 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:57 crc kubenswrapper[4823]: I0121 17:34:57.767433 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.513339 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.671047 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.690793 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.692345 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.695833 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.696010 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.696294 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.696476 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-dzs9s" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.717378 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.717551 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"350b339b-c723-4ff3-ab95-83e82c6c4d52","Type":"ContainerStarted","Data":"511bf161ba08d5364215c3046402b5bd6f66547c85d3fa84faab5cc3f3ae6767"} Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.719735 4823 generic.go:334] "Generic (PLEG): container finished" podID="dbb101cd-8034-422f-9016-d0baa0d9513b" containerID="7e1ad064a645401574fb9320f1b4d2719d879cc740f84e7b68fe59ae6e716211" exitCode=0 Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.720817 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"dbb101cd-8034-422f-9016-d0baa0d9513b","Type":"ContainerDied","Data":"7e1ad064a645401574fb9320f1b4d2719d879cc740f84e7b68fe59ae6e716211"} Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.797084 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d6xkq"] Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.799338 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" podUID="62d8c4d5-179a-4a89-85cb-26f84465bf2b" containerName="dnsmasq-dns" containerID="cri-o://2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580" gracePeriod=10 Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.845514 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.112454194 podStartE2EDuration="46.845492219s" podCreationTimestamp="2026-01-21 17:34:12 +0000 UTC" firstStartedPulling="2026-01-21 17:34:32.482212536 +0000 UTC m=+1073.408343396" lastFinishedPulling="2026-01-21 17:34:49.215250521 +0000 UTC m=+1090.141381421" observedRunningTime="2026-01-21 17:34:58.840218519 +0000 UTC m=+1099.766349389" watchObservedRunningTime="2026-01-21 17:34:58.845492219 +0000 UTC m=+1099.771623089" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.866154 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ecf31a3-66b0-40d6-8eab-93050f79c68a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.866260 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9ecf31a3-66b0-40d6-8eab-93050f79c68a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.866395 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ecf31a3-66b0-40d6-8eab-93050f79c68a-config\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.866505 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecf31a3-66b0-40d6-8eab-93050f79c68a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.866682 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ecf31a3-66b0-40d6-8eab-93050f79c68a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.866781 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ecf31a3-66b0-40d6-8eab-93050f79c68a-scripts\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.866870 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tzwk\" (UniqueName: \"kubernetes.io/projected/9ecf31a3-66b0-40d6-8eab-93050f79c68a-kube-api-access-6tzwk\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.968257 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecf31a3-66b0-40d6-8eab-93050f79c68a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.968618 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ecf31a3-66b0-40d6-8eab-93050f79c68a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.968649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ecf31a3-66b0-40d6-8eab-93050f79c68a-scripts\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.968675 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tzwk\" (UniqueName: \"kubernetes.io/projected/9ecf31a3-66b0-40d6-8eab-93050f79c68a-kube-api-access-6tzwk\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.968727 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ecf31a3-66b0-40d6-8eab-93050f79c68a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.968746 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ecf31a3-66b0-40d6-8eab-93050f79c68a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.968788 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ecf31a3-66b0-40d6-8eab-93050f79c68a-config\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.971920 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ecf31a3-66b0-40d6-8eab-93050f79c68a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.971937 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ecf31a3-66b0-40d6-8eab-93050f79c68a-config\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.972561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ecf31a3-66b0-40d6-8eab-93050f79c68a-scripts\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.973570 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecf31a3-66b0-40d6-8eab-93050f79c68a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.974714 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ecf31a3-66b0-40d6-8eab-93050f79c68a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.976767 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ecf31a3-66b0-40d6-8eab-93050f79c68a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:58 crc kubenswrapper[4823]: I0121 17:34:58.986087 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tzwk\" (UniqueName: \"kubernetes.io/projected/9ecf31a3-66b0-40d6-8eab-93050f79c68a-kube-api-access-6tzwk\") pod \"ovn-northd-0\" (UID: \"9ecf31a3-66b0-40d6-8eab-93050f79c68a\") " pod="openstack/ovn-northd-0" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.014553 4823 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.282733 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.376466 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-dns-svc\") pod \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.376590 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-ovsdbserver-sb\") pod \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.376647 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-config\") pod \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.376723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmjp2\" (UniqueName: \"kubernetes.io/projected/62d8c4d5-179a-4a89-85cb-26f84465bf2b-kube-api-access-bmjp2\") pod \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\" (UID: \"62d8c4d5-179a-4a89-85cb-26f84465bf2b\") " Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.385423 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d8c4d5-179a-4a89-85cb-26f84465bf2b-kube-api-access-bmjp2" (OuterVolumeSpecName: "kube-api-access-bmjp2") pod "62d8c4d5-179a-4a89-85cb-26f84465bf2b" (UID: "62d8c4d5-179a-4a89-85cb-26f84465bf2b"). InnerVolumeSpecName "kube-api-access-bmjp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.430370 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "62d8c4d5-179a-4a89-85cb-26f84465bf2b" (UID: "62d8c4d5-179a-4a89-85cb-26f84465bf2b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.433153 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62d8c4d5-179a-4a89-85cb-26f84465bf2b" (UID: "62d8c4d5-179a-4a89-85cb-26f84465bf2b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.435438 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-config" (OuterVolumeSpecName: "config") pod "62d8c4d5-179a-4a89-85cb-26f84465bf2b" (UID: "62d8c4d5-179a-4a89-85cb-26f84465bf2b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.478963 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.478993 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.479004 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62d8c4d5-179a-4a89-85cb-26f84465bf2b-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.479014 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmjp2\" (UniqueName: \"kubernetes.io/projected/62d8c4d5-179a-4a89-85cb-26f84465bf2b-kube-api-access-bmjp2\") on node \"crc\" DevicePath \"\"" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.520504 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 17:34:59 crc kubenswrapper[4823]: W0121 17:34:59.523767 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ecf31a3_66b0_40d6_8eab_93050f79c68a.slice/crio-ea193f9157ccb79051a026b011d66ea2a616f604279c100155b71401276c20ba WatchSource:0}: Error finding container ea193f9157ccb79051a026b011d66ea2a616f604279c100155b71401276c20ba: Status 404 returned error can't find the container with id ea193f9157ccb79051a026b011d66ea2a616f604279c100155b71401276c20ba Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.728909 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.728935 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" event={"ID":"62d8c4d5-179a-4a89-85cb-26f84465bf2b","Type":"ContainerDied","Data":"2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580"} Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.728981 4823 scope.go:117] "RemoveContainer" containerID="2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.728846 4823 generic.go:334] "Generic (PLEG): container finished" podID="62d8c4d5-179a-4a89-85cb-26f84465bf2b" containerID="2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580" exitCode=0 Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.729125 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-d6xkq" event={"ID":"62d8c4d5-179a-4a89-85cb-26f84465bf2b","Type":"ContainerDied","Data":"3b96fdfceda77a869018cfecae67a712cb13820b0a2606fa787050fc18676c58"} Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.740165 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"dbb101cd-8034-422f-9016-d0baa0d9513b","Type":"ContainerStarted","Data":"0cdd5757a762e08ab7de627fcfe24ddb41a585a9ea9c28524e2c7660574bb81e"} Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.742739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9ecf31a3-66b0-40d6-8eab-93050f79c68a","Type":"ContainerStarted","Data":"ea193f9157ccb79051a026b011d66ea2a616f604279c100155b71401276c20ba"} Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.755779 4823 scope.go:117] "RemoveContainer" containerID="716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.768650 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=34.641552965 podStartE2EDuration="46.768625885s" podCreationTimestamp="2026-01-21 17:34:13 +0000 UTC" firstStartedPulling="2026-01-21 17:34:32.483345224 +0000 UTC m=+1073.409476084" lastFinishedPulling="2026-01-21 17:34:44.610418144 +0000 UTC m=+1085.536549004" observedRunningTime="2026-01-21 17:34:59.767606139 +0000 UTC m=+1100.693737019" watchObservedRunningTime="2026-01-21 17:34:59.768625885 +0000 UTC m=+1100.694756745" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.778068 4823 scope.go:117] "RemoveContainer" containerID="2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580" Jan 21 17:34:59 crc kubenswrapper[4823]: E0121 17:34:59.783468 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580\": container with ID starting with 2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580 not found: ID does not exist" containerID="2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.783523 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580"} err="failed to get container status \"2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580\": rpc error: code = NotFound desc 
= could not find container \"2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580\": container with ID starting with 2c837cb1b5fe7231d7d05fe0ffef1fca89c738ae359ad33e3436e1fa3a3ea580 not found: ID does not exist" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.783552 4823 scope.go:117] "RemoveContainer" containerID="716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47" Jan 21 17:34:59 crc kubenswrapper[4823]: E0121 17:34:59.784226 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47\": container with ID starting with 716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47 not found: ID does not exist" containerID="716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.784245 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47"} err="failed to get container status \"716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47\": rpc error: code = NotFound desc = could not find container \"716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47\": container with ID starting with 716ffd7114ad545e1e1595af23476c91d542cc7937161cc9268d6c44a3f8bc47 not found: ID does not exist" Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.792362 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d6xkq"] Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.798661 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d6xkq"] Jan 21 17:34:59 crc kubenswrapper[4823]: I0121 17:34:59.799443 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 17:35:00 crc kubenswrapper[4823]: I0121 17:35:00.115485 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:35:00 crc kubenswrapper[4823]: I0121 17:35:00.115560 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ht698" Jan 21 17:35:00 crc kubenswrapper[4823]: I0121 17:35:00.750565 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9ecf31a3-66b0-40d6-8eab-93050f79c68a","Type":"ContainerStarted","Data":"79d23048e3ee2c6e59bf5d8efa2166aa815c510ce14b8e62aa051fc6ea2e7f96"} Jan 21 17:35:01 crc kubenswrapper[4823]: I0121 17:35:01.357442 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62d8c4d5-179a-4a89-85cb-26f84465bf2b" path="/var/lib/kubelet/pods/62d8c4d5-179a-4a89-85cb-26f84465bf2b/volumes" Jan 21 17:35:01 crc kubenswrapper[4823]: I0121 17:35:01.760426 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9ecf31a3-66b0-40d6-8eab-93050f79c68a","Type":"ContainerStarted","Data":"c8174e8d6c5ecf00cf636e40403c425feecd9a9e44a062838370f3d335988c8e"} Jan 21 17:35:01 crc kubenswrapper[4823]: I0121 17:35:01.760958 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 17:35:01 crc kubenswrapper[4823]: I0121 17:35:01.783620 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.866724774 podStartE2EDuration="3.783598454s" 
podCreationTimestamp="2026-01-21 17:34:58 +0000 UTC" firstStartedPulling="2026-01-21 17:34:59.527083665 +0000 UTC m=+1100.453214525" lastFinishedPulling="2026-01-21 17:35:00.443957355 +0000 UTC m=+1101.370088205" observedRunningTime="2026-01-21 17:35:01.779114123 +0000 UTC m=+1102.705244983" watchObservedRunningTime="2026-01-21 17:35:01.783598454 +0000 UTC m=+1102.709729324" Jan 21 17:35:03 crc kubenswrapper[4823]: I0121 17:35:03.518276 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 17:35:03 crc kubenswrapper[4823]: I0121 17:35:03.518563 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 17:35:03 crc kubenswrapper[4823]: I0121 17:35:03.622771 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 17:35:03 crc kubenswrapper[4823]: I0121 17:35:03.892696 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.502391 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8wgkf"] Jan 21 17:35:04 crc kubenswrapper[4823]: E0121 17:35:04.503127 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d8c4d5-179a-4a89-85cb-26f84465bf2b" containerName="init" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.503150 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d8c4d5-179a-4a89-85cb-26f84465bf2b" containerName="init" Jan 21 17:35:04 crc kubenswrapper[4823]: E0121 17:35:04.503175 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d8c4d5-179a-4a89-85cb-26f84465bf2b" containerName="dnsmasq-dns" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.503182 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d8c4d5-179a-4a89-85cb-26f84465bf2b" containerName="dnsmasq-dns" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.503332 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d8c4d5-179a-4a89-85cb-26f84465bf2b" containerName="dnsmasq-dns" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.503928 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8wgkf" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.512274 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8wgkf"] Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.574356 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fea18d0-be48-4202-b830-80f527487892-operator-scripts\") pod \"keystone-db-create-8wgkf\" (UID: \"5fea18d0-be48-4202-b830-80f527487892\") " pod="openstack/keystone-db-create-8wgkf" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.574483 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvx6d\" (UniqueName: \"kubernetes.io/projected/5fea18d0-be48-4202-b830-80f527487892-kube-api-access-mvx6d\") pod \"keystone-db-create-8wgkf\" (UID: \"5fea18d0-be48-4202-b830-80f527487892\") " pod="openstack/keystone-db-create-8wgkf" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.618371 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8070-account-create-update-mpj6v"] Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.619467 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8070-account-create-update-mpj6v" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.622215 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.631557 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8070-account-create-update-mpj6v"] Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.676413 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvx6d\" (UniqueName: \"kubernetes.io/projected/5fea18d0-be48-4202-b830-80f527487892-kube-api-access-mvx6d\") pod \"keystone-db-create-8wgkf\" (UID: \"5fea18d0-be48-4202-b830-80f527487892\") " pod="openstack/keystone-db-create-8wgkf" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.676515 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-operator-scripts\") pod \"keystone-8070-account-create-update-mpj6v\" (UID: \"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\") " pod="openstack/keystone-8070-account-create-update-mpj6v" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.676586 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf5k7\" (UniqueName: \"kubernetes.io/projected/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-kube-api-access-vf5k7\") pod \"keystone-8070-account-create-update-mpj6v\" (UID: \"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\") " pod="openstack/keystone-8070-account-create-update-mpj6v" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.676640 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fea18d0-be48-4202-b830-80f527487892-operator-scripts\") pod \"keystone-db-create-8wgkf\" (UID: \"5fea18d0-be48-4202-b830-80f527487892\") " pod="openstack/keystone-db-create-8wgkf" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.678443 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fea18d0-be48-4202-b830-80f527487892-operator-scripts\") pod \"keystone-db-create-8wgkf\" (UID: \"5fea18d0-be48-4202-b830-80f527487892\") " pod="openstack/keystone-db-create-8wgkf" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.698965 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5jn7l"] Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.703135 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5jn7l" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.704140 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvx6d\" (UniqueName: \"kubernetes.io/projected/5fea18d0-be48-4202-b830-80f527487892-kube-api-access-mvx6d\") pod \"keystone-db-create-8wgkf\" (UID: \"5fea18d0-be48-4202-b830-80f527487892\") " pod="openstack/keystone-db-create-8wgkf" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.715137 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5jn7l"] Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.778684 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsxt\" (UniqueName: \"kubernetes.io/projected/e8417175-6b47-4fd9-961c-e422480b3353-kube-api-access-tnsxt\") pod \"placement-db-create-5jn7l\" (UID: \"e8417175-6b47-4fd9-961c-e422480b3353\") " pod="openstack/placement-db-create-5jn7l" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.779386 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-operator-scripts\") pod \"keystone-8070-account-create-update-mpj6v\" (UID: \"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\") " pod="openstack/keystone-8070-account-create-update-mpj6v" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.779434 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8417175-6b47-4fd9-961c-e422480b3353-operator-scripts\") pod \"placement-db-create-5jn7l\" (UID: \"e8417175-6b47-4fd9-961c-e422480b3353\") " pod="openstack/placement-db-create-5jn7l" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.779465 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf5k7\" (UniqueName: \"kubernetes.io/projected/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-kube-api-access-vf5k7\") pod \"keystone-8070-account-create-update-mpj6v\" (UID: \"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\") " pod="openstack/keystone-8070-account-create-update-mpj6v" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.780296 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-operator-scripts\") pod \"keystone-8070-account-create-update-mpj6v\" (UID: \"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\") " pod="openstack/keystone-8070-account-create-update-mpj6v" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.804306 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf5k7\" (UniqueName: \"kubernetes.io/projected/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-kube-api-access-vf5k7\") pod \"keystone-8070-account-create-update-mpj6v\" (UID: 
\"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\") " pod="openstack/keystone-8070-account-create-update-mpj6v" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.806551 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c720-account-create-update-ph4fq"] Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.808518 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c720-account-create-update-ph4fq" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.815843 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c720-account-create-update-ph4fq"] Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.816310 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.834529 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8wgkf" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.881637 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8417175-6b47-4fd9-961c-e422480b3353-operator-scripts\") pod \"placement-db-create-5jn7l\" (UID: \"e8417175-6b47-4fd9-961c-e422480b3353\") " pod="openstack/placement-db-create-5jn7l" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.881770 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnsxt\" (UniqueName: \"kubernetes.io/projected/e8417175-6b47-4fd9-961c-e422480b3353-kube-api-access-tnsxt\") pod \"placement-db-create-5jn7l\" (UID: \"e8417175-6b47-4fd9-961c-e422480b3353\") " pod="openstack/placement-db-create-5jn7l" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.881830 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqh7\" (UniqueName: \"kubernetes.io/projected/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-kube-api-access-txqh7\") pod \"placement-c720-account-create-update-ph4fq\" (UID: \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\") " pod="openstack/placement-c720-account-create-update-ph4fq" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.881945 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-operator-scripts\") pod \"placement-c720-account-create-update-ph4fq\" (UID: \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\") " pod="openstack/placement-c720-account-create-update-ph4fq" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.883321 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8417175-6b47-4fd9-961c-e422480b3353-operator-scripts\") pod \"placement-db-create-5jn7l\" (UID: \"e8417175-6b47-4fd9-961c-e422480b3353\") " pod="openstack/placement-db-create-5jn7l" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.905872 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnsxt\" (UniqueName: \"kubernetes.io/projected/e8417175-6b47-4fd9-961c-e422480b3353-kube-api-access-tnsxt\") pod \"placement-db-create-5jn7l\" (UID: \"e8417175-6b47-4fd9-961c-e422480b3353\") " pod="openstack/placement-db-create-5jn7l" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.944989 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8070-account-create-update-mpj6v" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.983992 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-operator-scripts\") pod \"placement-c720-account-create-update-ph4fq\" (UID: \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\") " pod="openstack/placement-c720-account-create-update-ph4fq" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.984441 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqh7\" (UniqueName: \"kubernetes.io/projected/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-kube-api-access-txqh7\") pod \"placement-c720-account-create-update-ph4fq\" (UID: \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\") " pod="openstack/placement-c720-account-create-update-ph4fq" Jan 21 17:35:04 crc kubenswrapper[4823]: I0121 17:35:04.986270 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-operator-scripts\") pod \"placement-c720-account-create-update-ph4fq\" (UID: \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\") " pod="openstack/placement-c720-account-create-update-ph4fq" Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.005363 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqh7\" (UniqueName: \"kubernetes.io/projected/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-kube-api-access-txqh7\") pod \"placement-c720-account-create-update-ph4fq\" (UID: \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\") " pod="openstack/placement-c720-account-create-update-ph4fq" Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.049346 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.049405 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.057980 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5jn7l" Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.156439 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c720-account-create-update-ph4fq" Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.278091 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8070-account-create-update-mpj6v"] Jan 21 17:35:05 crc kubenswrapper[4823]: W0121 17:35:05.290401 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6a94bd0_3d15_4c8d_8974_af4f7772cc74.slice/crio-3a7057bbc00ccd56e25819bc6e84d4a8328afd149d771eb105b4477878b7c011 WatchSource:0}: Error finding container 3a7057bbc00ccd56e25819bc6e84d4a8328afd149d771eb105b4477878b7c011: Status 404 returned error can't find the container with id 3a7057bbc00ccd56e25819bc6e84d4a8328afd149d771eb105b4477878b7c011 Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.329779 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8wgkf"] Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.798775 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c720-account-create-update-ph4fq"] Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.836135 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8070-account-create-update-mpj6v" event={"ID":"e6a94bd0-3d15-4c8d-8974-af4f7772cc74","Type":"ContainerStarted","Data":"b9189767fdf8874b86bb446add4d21e2113dedbdffe8c0d0c4d6cdc1cdefb7e8"} Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.836211 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8070-account-create-update-mpj6v" event={"ID":"e6a94bd0-3d15-4c8d-8974-af4f7772cc74","Type":"ContainerStarted","Data":"3a7057bbc00ccd56e25819bc6e84d4a8328afd149d771eb105b4477878b7c011"} Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.842465 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8wgkf" event={"ID":"5fea18d0-be48-4202-b830-80f527487892","Type":"ContainerStarted","Data":"7276526551e6347d3ed284b69f3f018bd6188f5a860bbe0792cad5b623e1522a"} Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.842522 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8wgkf" event={"ID":"5fea18d0-be48-4202-b830-80f527487892","Type":"ContainerStarted","Data":"001ef7fe6c4fe6c821991ac5afe4645938d15475036a7dbf69f2d9d55160e601"} Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.845212 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c720-account-create-update-ph4fq" event={"ID":"c67c073d-2bdd-4bcc-a73e-9eed04a74f17","Type":"ContainerStarted","Data":"d18a7b31fe6fe60ac532875f7547a8eed2593a06324d9a9962215f9fe1d2bfb5"} Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.846827 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d465f8f2-aad6-47d6-8887-ee38d8b846ac","Type":"ContainerStarted","Data":"471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9"} Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.847479 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.862514 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8070-account-create-update-mpj6v" podStartSLOduration=1.862495861 podStartE2EDuration="1.862495861s" podCreationTimestamp="2026-01-21 17:35:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:05.858105382 +0000 UTC m=+1106.784236252" watchObservedRunningTime="2026-01-21 17:35:05.862495861 +0000 UTC m=+1106.788626711" Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.885303 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5jn7l"] Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.892511 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-8wgkf" podStartSLOduration=1.892490902 podStartE2EDuration="1.892490902s" podCreationTimestamp="2026-01-21 17:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:05.880890075 +0000 UTC m=+1106.807020945" watchObservedRunningTime="2026-01-21 17:35:05.892490902 +0000 UTC m=+1106.818621762" Jan 21 17:35:05 crc kubenswrapper[4823]: I0121 17:35:05.906115 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.894187171 podStartE2EDuration="49.906091188s" podCreationTimestamp="2026-01-21 17:34:16 +0000 UTC" firstStartedPulling="2026-01-21 17:34:32.773051274 +0000 UTC m=+1073.699182134" lastFinishedPulling="2026-01-21 17:35:04.784955291 +0000 UTC m=+1105.711086151" observedRunningTime="2026-01-21 17:35:05.898624973 +0000 UTC m=+1106.824755843" watchObservedRunningTime="2026-01-21 17:35:05.906091188 +0000 UTC m=+1106.832222058" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.019621 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.098935 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.808261 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4tqlq"] Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.811904 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.836945 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4tqlq"] Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.876751 4823 generic.go:334] "Generic (PLEG): container finished" podID="e8417175-6b47-4fd9-961c-e422480b3353" containerID="b2d00b434edff5ceee405bfe6e0b2b9eed8c1bd40e67aafe523cd0a828e78761" exitCode=0 Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.877389 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5jn7l" event={"ID":"e8417175-6b47-4fd9-961c-e422480b3353","Type":"ContainerDied","Data":"b2d00b434edff5ceee405bfe6e0b2b9eed8c1bd40e67aafe523cd0a828e78761"} Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.877433 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5jn7l" event={"ID":"e8417175-6b47-4fd9-961c-e422480b3353","Type":"ContainerStarted","Data":"f50334cb1b8f384dcfe7d7288d78f1c0a70428afd95665db6838b9002b87a1fa"} Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.882665 4823 generic.go:334] "Generic (PLEG): container finished" podID="e6a94bd0-3d15-4c8d-8974-af4f7772cc74" containerID="b9189767fdf8874b86bb446add4d21e2113dedbdffe8c0d0c4d6cdc1cdefb7e8" exitCode=0 Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.882733 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8070-account-create-update-mpj6v" event={"ID":"e6a94bd0-3d15-4c8d-8974-af4f7772cc74","Type":"ContainerDied","Data":"b9189767fdf8874b86bb446add4d21e2113dedbdffe8c0d0c4d6cdc1cdefb7e8"} Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.897152 4823 generic.go:334] "Generic (PLEG): container finished" podID="5fea18d0-be48-4202-b830-80f527487892" containerID="7276526551e6347d3ed284b69f3f018bd6188f5a860bbe0792cad5b623e1522a" exitCode=0 Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.897205 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8wgkf" event={"ID":"5fea18d0-be48-4202-b830-80f527487892","Type":"ContainerDied","Data":"7276526551e6347d3ed284b69f3f018bd6188f5a860bbe0792cad5b623e1522a"} Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.901435 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-q27fp"] Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.903238 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-q27fp" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.907228 4823 generic.go:334] "Generic (PLEG): container finished" podID="c67c073d-2bdd-4bcc-a73e-9eed04a74f17" containerID="746440c09887bb5adec32d9050cd855d3d5110a1f1276d2da0add51219afb108" exitCode=0 Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.907709 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c720-account-create-update-ph4fq" event={"ID":"c67c073d-2bdd-4bcc-a73e-9eed04a74f17","Type":"ContainerDied","Data":"746440c09887bb5adec32d9050cd855d3d5110a1f1276d2da0add51219afb108"} Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.924977 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjp5w\" (UniqueName: \"kubernetes.io/projected/00ca24fe-07a5-4b20-a18d-5f62c510030a-kube-api-access-bjp5w\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.925265 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-config\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.925484 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.925617 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.925804 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:06 crc kubenswrapper[4823]: I0121 17:35:06.946612 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-q27fp"] Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.014184 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-e2a3-account-create-update-mx6xz"] Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.015438 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-e2a3-account-create-update-mx6xz" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.018550 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.021175 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-e2a3-account-create-update-mx6xz"] Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.038166 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.038289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpd7\" (UniqueName: \"kubernetes.io/projected/2a901c8f-bafb-433a-8de8-dd20b97f927d-kube-api-access-5xpd7\") pod \"watcher-db-create-q27fp\" (UID: \"2a901c8f-bafb-433a-8de8-dd20b97f927d\") " pod="openstack/watcher-db-create-q27fp" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.038329 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjp5w\" (UniqueName: \"kubernetes.io/projected/00ca24fe-07a5-4b20-a18d-5f62c510030a-kube-api-access-bjp5w\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.038349 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-config\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.038391 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a901c8f-bafb-433a-8de8-dd20b97f927d-operator-scripts\") pod \"watcher-db-create-q27fp\" (UID: \"2a901c8f-bafb-433a-8de8-dd20b97f927d\") " pod="openstack/watcher-db-create-q27fp" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.038443 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.038502 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.039710 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc 
kubenswrapper[4823]: I0121 17:35:07.040709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.042552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.043146 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-config\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.068246 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjp5w\" (UniqueName: \"kubernetes.io/projected/00ca24fe-07a5-4b20-a18d-5f62c510030a-kube-api-access-bjp5w\") pod \"dnsmasq-dns-b8fbc5445-4tqlq\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.138455 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.139753 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpd7\" (UniqueName: \"kubernetes.io/projected/2a901c8f-bafb-433a-8de8-dd20b97f927d-kube-api-access-5xpd7\") pod \"watcher-db-create-q27fp\" (UID: \"2a901c8f-bafb-433a-8de8-dd20b97f927d\") " pod="openstack/watcher-db-create-q27fp" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.139809 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a901c8f-bafb-433a-8de8-dd20b97f927d-operator-scripts\") pod \"watcher-db-create-q27fp\" (UID: \"2a901c8f-bafb-433a-8de8-dd20b97f927d\") " pod="openstack/watcher-db-create-q27fp" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.139902 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr24s\" (UniqueName: \"kubernetes.io/projected/644c6573-fd38-482c-8cef-409affff3581-kube-api-access-fr24s\") pod \"watcher-e2a3-account-create-update-mx6xz\" (UID: \"644c6573-fd38-482c-8cef-409affff3581\") " pod="openstack/watcher-e2a3-account-create-update-mx6xz" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.139937 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644c6573-fd38-482c-8cef-409affff3581-operator-scripts\") pod \"watcher-e2a3-account-create-update-mx6xz\" (UID: \"644c6573-fd38-482c-8cef-409affff3581\") " pod="openstack/watcher-e2a3-account-create-update-mx6xz" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.140843 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2a901c8f-bafb-433a-8de8-dd20b97f927d-operator-scripts\") pod \"watcher-db-create-q27fp\" (UID: \"2a901c8f-bafb-433a-8de8-dd20b97f927d\") " pod="openstack/watcher-db-create-q27fp" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.157625 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpd7\" (UniqueName: \"kubernetes.io/projected/2a901c8f-bafb-433a-8de8-dd20b97f927d-kube-api-access-5xpd7\") pod \"watcher-db-create-q27fp\" (UID: \"2a901c8f-bafb-433a-8de8-dd20b97f927d\") " pod="openstack/watcher-db-create-q27fp" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.223735 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-q27fp" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.242303 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr24s\" (UniqueName: \"kubernetes.io/projected/644c6573-fd38-482c-8cef-409affff3581-kube-api-access-fr24s\") pod \"watcher-e2a3-account-create-update-mx6xz\" (UID: \"644c6573-fd38-482c-8cef-409affff3581\") " pod="openstack/watcher-e2a3-account-create-update-mx6xz" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.242388 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644c6573-fd38-482c-8cef-409affff3581-operator-scripts\") pod \"watcher-e2a3-account-create-update-mx6xz\" (UID: \"644c6573-fd38-482c-8cef-409affff3581\") " pod="openstack/watcher-e2a3-account-create-update-mx6xz" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.243914 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644c6573-fd38-482c-8cef-409affff3581-operator-scripts\") pod \"watcher-e2a3-account-create-update-mx6xz\" (UID: \"644c6573-fd38-482c-8cef-409affff3581\") " pod="openstack/watcher-e2a3-account-create-update-mx6xz" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.264465 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr24s\" (UniqueName: \"kubernetes.io/projected/644c6573-fd38-482c-8cef-409affff3581-kube-api-access-fr24s\") pod \"watcher-e2a3-account-create-update-mx6xz\" (UID: \"644c6573-fd38-482c-8cef-409affff3581\") " pod="openstack/watcher-e2a3-account-create-update-mx6xz" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.339171 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-e2a3-account-create-update-mx6xz" Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.684370 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4tqlq"] Jan 21 17:35:07 crc kubenswrapper[4823]: W0121 17:35:07.687368 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00ca24fe_07a5_4b20_a18d_5f62c510030a.slice/crio-9158637442d98d07bfddce95b8690e5fc9d3479435c146aec1b9fc70aaf6ef27 WatchSource:0}: Error finding container 9158637442d98d07bfddce95b8690e5fc9d3479435c146aec1b9fc70aaf6ef27: Status 404 returned error can't find the container with id 9158637442d98d07bfddce95b8690e5fc9d3479435c146aec1b9fc70aaf6ef27 Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.878660 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-q27fp"] Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.921422 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-q27fp" event={"ID":"2a901c8f-bafb-433a-8de8-dd20b97f927d","Type":"ContainerStarted","Data":"07c32468d16b7431a9da2aad0e6d101ea915e4e759848e9abe6df93260ac69b5"} Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.925089 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" event={"ID":"00ca24fe-07a5-4b20-a18d-5f62c510030a","Type":"ContainerStarted","Data":"1c512bc6b034e7740f216daf16223dd5d15e25b5efd7381b139594befcae59d4"} Jan 21 17:35:07 crc kubenswrapper[4823]: I0121 17:35:07.925199 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" event={"ID":"00ca24fe-07a5-4b20-a18d-5f62c510030a","Type":"ContainerStarted","Data":"9158637442d98d07bfddce95b8690e5fc9d3479435c146aec1b9fc70aaf6ef27"} Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:07.985978 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-e2a3-account-create-update-mx6xz"] Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.003422 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.013750 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.013969 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.036607 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.036661 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2swrb"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.036804 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.036607 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 21 17:35:08 crc kubenswrapper[4823]: W0121 17:35:08.160112 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod644c6573_fd38_482c_8cef_409affff3581.slice/crio-ba75b1dbd7cdbf3a4b6d956ff0495e966d798e57617272d56ef705436393fdf0 WatchSource:0}: Error finding container ba75b1dbd7cdbf3a4b6d956ff0495e966d798e57617272d56ef705436393fdf0: Status 404 returned error can't find the container with id ba75b1dbd7cdbf3a4b6d956ff0495e966d798e57617272d56ef705436393fdf0
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.161492 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.161608 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.161723 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjlx6\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-kube-api-access-sjlx6\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.161799 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1146f69b-d935-4a56-9f65-e96bf9539c14-cache\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.161831 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1146f69b-d935-4a56-9f65-e96bf9539c14-lock\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: E0121 17:35:08.274412 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 17:35:08 crc kubenswrapper[4823]: E0121 17:35:08.274710 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 17:35:08 crc kubenswrapper[4823]: E0121 17:35:08.275051 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift podName:1146f69b-d935-4a56-9f65-e96bf9539c14 nodeName:}" failed. No retries permitted until 2026-01-21 17:35:08.774832444 +0000 UTC m=+1109.700963304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift") pod "swift-storage-0" (UID: "1146f69b-d935-4a56-9f65-e96bf9539c14") : configmap "swift-ring-files" not found
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.274428 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.275530 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjlx6\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-kube-api-access-sjlx6\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.275708 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1146f69b-d935-4a56-9f65-e96bf9539c14-cache\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.275805 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1146f69b-d935-4a56-9f65-e96bf9539c14-lock\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.275906 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.276396 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.277186 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1146f69b-d935-4a56-9f65-e96bf9539c14-cache\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.277247 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1146f69b-d935-4a56-9f65-e96bf9539c14-lock\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.300667 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjlx6\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-kube-api-access-sjlx6\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.303834 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.571301 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-w2wtt"]
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.572835 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.574919 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.577426 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.577434 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.586274 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w2wtt"]
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.619349 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8wgkf"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.681502 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5jn7l"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.689078 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-swiftconf\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.689128 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-combined-ca-bundle\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.689168 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-etc-swift\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.689206 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-dispersionconf\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.689240 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-scripts\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.689294 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzfxg\" (UniqueName: \"kubernetes.io/projected/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-kube-api-access-fzfxg\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.689329 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-ring-data-devices\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.699917 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8070-account-create-update-mpj6v"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.704671 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c720-account-create-update-ph4fq"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.790697 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvx6d\" (UniqueName: \"kubernetes.io/projected/5fea18d0-be48-4202-b830-80f527487892-kube-api-access-mvx6d\") pod \"5fea18d0-be48-4202-b830-80f527487892\" (UID: \"5fea18d0-be48-4202-b830-80f527487892\") "
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.790755 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnsxt\" (UniqueName: \"kubernetes.io/projected/e8417175-6b47-4fd9-961c-e422480b3353-kube-api-access-tnsxt\") pod \"e8417175-6b47-4fd9-961c-e422480b3353\" (UID: \"e8417175-6b47-4fd9-961c-e422480b3353\") "
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.790906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fea18d0-be48-4202-b830-80f527487892-operator-scripts\") pod \"5fea18d0-be48-4202-b830-80f527487892\" (UID: \"5fea18d0-be48-4202-b830-80f527487892\") "
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791017 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8417175-6b47-4fd9-961c-e422480b3353-operator-scripts\") pod \"e8417175-6b47-4fd9-961c-e422480b3353\" (UID: \"e8417175-6b47-4fd9-961c-e422480b3353\") "
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791280 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzfxg\" (UniqueName: \"kubernetes.io/projected/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-kube-api-access-fzfxg\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791323 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791349 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-ring-data-devices\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791409 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-swiftconf\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791441 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-combined-ca-bundle\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791486 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-etc-swift\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791530 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-dispersionconf\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-scripts\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.791893 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fea18d0-be48-4202-b830-80f527487892-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fea18d0-be48-4202-b830-80f527487892" (UID: "5fea18d0-be48-4202-b830-80f527487892"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.792490 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-scripts\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.792908 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8417175-6b47-4fd9-961c-e422480b3353-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8417175-6b47-4fd9-961c-e422480b3353" (UID: "e8417175-6b47-4fd9-961c-e422480b3353"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:08 crc kubenswrapper[4823]: E0121 17:35:08.793448 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 17:35:08 crc kubenswrapper[4823]: E0121 17:35:08.793470 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 17:35:08 crc kubenswrapper[4823]: E0121 17:35:08.793520 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift podName:1146f69b-d935-4a56-9f65-e96bf9539c14 nodeName:}" failed. No retries permitted until 2026-01-21 17:35:09.793499757 +0000 UTC m=+1110.719630617 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift") pod "swift-storage-0" (UID: "1146f69b-d935-4a56-9f65-e96bf9539c14") : configmap "swift-ring-files" not found
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.794285 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-ring-data-devices\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.794437 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-etc-swift\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.796388 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8417175-6b47-4fd9-961c-e422480b3353-kube-api-access-tnsxt" (OuterVolumeSpecName: "kube-api-access-tnsxt") pod "e8417175-6b47-4fd9-961c-e422480b3353" (UID: "e8417175-6b47-4fd9-961c-e422480b3353"). InnerVolumeSpecName "kube-api-access-tnsxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.796923 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-combined-ca-bundle\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.797002 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-dispersionconf\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.797018 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fea18d0-be48-4202-b830-80f527487892-kube-api-access-mvx6d" (OuterVolumeSpecName: "kube-api-access-mvx6d") pod "5fea18d0-be48-4202-b830-80f527487892" (UID: "5fea18d0-be48-4202-b830-80f527487892"). InnerVolumeSpecName "kube-api-access-mvx6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.799197 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-swiftconf\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.809765 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzfxg\" (UniqueName: \"kubernetes.io/projected/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-kube-api-access-fzfxg\") pod \"swift-ring-rebalance-w2wtt\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") " pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.892810 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-operator-scripts\") pod \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\" (UID: \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\") "
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.892958 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-operator-scripts\") pod \"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\" (UID: \"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\") "
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.893038 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txqh7\" (UniqueName: \"kubernetes.io/projected/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-kube-api-access-txqh7\") pod \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\" (UID: \"c67c073d-2bdd-4bcc-a73e-9eed04a74f17\") "
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.893111 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf5k7\" (UniqueName: \"kubernetes.io/projected/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-kube-api-access-vf5k7\") pod \"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\" (UID: \"e6a94bd0-3d15-4c8d-8974-af4f7772cc74\") "
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.893294 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c67c073d-2bdd-4bcc-a73e-9eed04a74f17" (UID: "c67c073d-2bdd-4bcc-a73e-9eed04a74f17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.893497 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6a94bd0-3d15-4c8d-8974-af4f7772cc74" (UID: "e6a94bd0-3d15-4c8d-8974-af4f7772cc74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.894066 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.894087 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8417175-6b47-4fd9-961c-e422480b3353-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.894100 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvx6d\" (UniqueName: \"kubernetes.io/projected/5fea18d0-be48-4202-b830-80f527487892-kube-api-access-mvx6d\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.894114 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnsxt\" (UniqueName: \"kubernetes.io/projected/e8417175-6b47-4fd9-961c-e422480b3353-kube-api-access-tnsxt\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.894126 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.894138 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fea18d0-be48-4202-b830-80f527487892-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.896817 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-kube-api-access-vf5k7" (OuterVolumeSpecName: "kube-api-access-vf5k7") pod "e6a94bd0-3d15-4c8d-8974-af4f7772cc74" (UID: "e6a94bd0-3d15-4c8d-8974-af4f7772cc74"). InnerVolumeSpecName "kube-api-access-vf5k7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.897339 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-kube-api-access-txqh7" (OuterVolumeSpecName: "kube-api-access-txqh7") pod "c67c073d-2bdd-4bcc-a73e-9eed04a74f17" (UID: "c67c073d-2bdd-4bcc-a73e-9eed04a74f17"). InnerVolumeSpecName "kube-api-access-txqh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.932678 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.934916 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-q27fp" event={"ID":"2a901c8f-bafb-433a-8de8-dd20b97f927d","Type":"ContainerStarted","Data":"d3322b6d8c50f9526b2340e0d354f64ad043daccd18de494b444b0edce175e17"}
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.946080 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5jn7l" event={"ID":"e8417175-6b47-4fd9-961c-e422480b3353","Type":"ContainerDied","Data":"f50334cb1b8f384dcfe7d7288d78f1c0a70428afd95665db6838b9002b87a1fa"}
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.946117 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50334cb1b8f384dcfe7d7288d78f1c0a70428afd95665db6838b9002b87a1fa"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.946171 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5jn7l"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.959933 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8wgkf" event={"ID":"5fea18d0-be48-4202-b830-80f527487892","Type":"ContainerDied","Data":"001ef7fe6c4fe6c821991ac5afe4645938d15475036a7dbf69f2d9d55160e601"}
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.959965 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8wgkf"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.959974 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="001ef7fe6c4fe6c821991ac5afe4645938d15475036a7dbf69f2d9d55160e601"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.962348 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8070-account-create-update-mpj6v" event={"ID":"e6a94bd0-3d15-4c8d-8974-af4f7772cc74","Type":"ContainerDied","Data":"3a7057bbc00ccd56e25819bc6e84d4a8328afd149d771eb105b4477878b7c011"}
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.962402 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8070-account-create-update-mpj6v"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.962467 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-q27fp" podStartSLOduration=2.962444721 podStartE2EDuration="2.962444721s" podCreationTimestamp="2026-01-21 17:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:08.954388552 +0000 UTC m=+1109.880519432" watchObservedRunningTime="2026-01-21 17:35:08.962444721 +0000 UTC m=+1109.888575571"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.962408 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a7057bbc00ccd56e25819bc6e84d4a8328afd149d771eb105b4477878b7c011"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.965046 4823 generic.go:334] "Generic (PLEG): container finished" podID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerID="1c512bc6b034e7740f216daf16223dd5d15e25b5efd7381b139594befcae59d4" exitCode=0
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.965104 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" event={"ID":"00ca24fe-07a5-4b20-a18d-5f62c510030a","Type":"ContainerDied","Data":"1c512bc6b034e7740f216daf16223dd5d15e25b5efd7381b139594befcae59d4"}
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.967444 4823 generic.go:334] "Generic (PLEG): container finished" podID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" containerID="72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07" exitCode=0
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.967532 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dd8ea30-a041-4ce6-8a36-b8a355b076dc","Type":"ContainerDied","Data":"72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07"}
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.970050 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c720-account-create-update-ph4fq" event={"ID":"c67c073d-2bdd-4bcc-a73e-9eed04a74f17","Type":"ContainerDied","Data":"d18a7b31fe6fe60ac532875f7547a8eed2593a06324d9a9962215f9fe1d2bfb5"}
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.970089 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d18a7b31fe6fe60ac532875f7547a8eed2593a06324d9a9962215f9fe1d2bfb5"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.970103 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c720-account-create-update-ph4fq"
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.971228 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-e2a3-account-create-update-mx6xz" event={"ID":"644c6573-fd38-482c-8cef-409affff3581","Type":"ContainerStarted","Data":"97ce6be46288dcd33a81e5bb9ff4651f313862b442b113096dc8ef87e8c7f9a1"}
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.971261 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-e2a3-account-create-update-mx6xz" event={"ID":"644c6573-fd38-482c-8cef-409affff3581","Type":"ContainerStarted","Data":"ba75b1dbd7cdbf3a4b6d956ff0495e966d798e57617272d56ef705436393fdf0"}
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.995446 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf5k7\" (UniqueName: \"kubernetes.io/projected/e6a94bd0-3d15-4c8d-8974-af4f7772cc74-kube-api-access-vf5k7\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:08 crc kubenswrapper[4823]: I0121 17:35:08.995485 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txqh7\" (UniqueName: \"kubernetes.io/projected/c67c073d-2bdd-4bcc-a73e-9eed04a74f17-kube-api-access-txqh7\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.016150 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-e2a3-account-create-update-mx6xz" podStartSLOduration=3.016121857 podStartE2EDuration="3.016121857s" podCreationTimestamp="2026-01-21 17:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:09.000087561 +0000 UTC m=+1109.926218441" watchObservedRunningTime="2026-01-21 17:35:09.016121857 +0000 UTC m=+1109.942252717"
Jan 21 17:35:09 crc kubenswrapper[4823]: W0121 17:35:09.449962 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfea14ef_6f13_4b56_99c5_74c8bb2d5e43.slice/crio-f3f520c718abbcceec97328567be941e3e662a1016ef7a61d7f9ccf7288bb9a3 WatchSource:0}: Error finding container f3f520c718abbcceec97328567be941e3e662a1016ef7a61d7f9ccf7288bb9a3: Status 404 returned error can't find the container with id f3f520c718abbcceec97328567be941e3e662a1016ef7a61d7f9ccf7288bb9a3
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.450172 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w2wtt"]
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.811287 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:09 crc kubenswrapper[4823]: E0121 17:35:09.811480 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 17:35:09 crc kubenswrapper[4823]: E0121 17:35:09.811496 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 17:35:09 crc kubenswrapper[4823]: E0121 17:35:09.811541 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift podName:1146f69b-d935-4a56-9f65-e96bf9539c14 nodeName:}" failed. No retries permitted until 2026-01-21 17:35:11.811526017 +0000 UTC m=+1112.737656877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift") pod "swift-storage-0" (UID: "1146f69b-d935-4a56-9f65-e96bf9539c14") : configmap "swift-ring-files" not found
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.972203 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-m2t5j"]
Jan 21 17:35:09 crc kubenswrapper[4823]: E0121 17:35:09.972564 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c67c073d-2bdd-4bcc-a73e-9eed04a74f17" containerName="mariadb-account-create-update"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.972578 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67c073d-2bdd-4bcc-a73e-9eed04a74f17" containerName="mariadb-account-create-update"
Jan 21 17:35:09 crc kubenswrapper[4823]: E0121 17:35:09.972601 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a94bd0-3d15-4c8d-8974-af4f7772cc74" containerName="mariadb-account-create-update"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.972607 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a94bd0-3d15-4c8d-8974-af4f7772cc74" containerName="mariadb-account-create-update"
Jan 21 17:35:09 crc kubenswrapper[4823]: E0121 17:35:09.972624 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8417175-6b47-4fd9-961c-e422480b3353" containerName="mariadb-database-create"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.972629 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8417175-6b47-4fd9-961c-e422480b3353" containerName="mariadb-database-create"
Jan 21 17:35:09 crc kubenswrapper[4823]: E0121 17:35:09.972642 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fea18d0-be48-4202-b830-80f527487892" containerName="mariadb-database-create"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.972647 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fea18d0-be48-4202-b830-80f527487892" containerName="mariadb-database-create"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.972798 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fea18d0-be48-4202-b830-80f527487892" containerName="mariadb-database-create"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.972812 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a94bd0-3d15-4c8d-8974-af4f7772cc74" containerName="mariadb-account-create-update"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.972824 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8417175-6b47-4fd9-961c-e422480b3353" containerName="mariadb-database-create"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.972842 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c67c073d-2bdd-4bcc-a73e-9eed04a74f17" containerName="mariadb-account-create-update"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.973442 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m2t5j"
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.980172 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m2t5j"]
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.981526 4823 generic.go:334] "Generic (PLEG): container finished" podID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" containerID="bf8050866cc839063b75d98e01383637e0e7b6575b76d4460b5d2bb06be62370" exitCode=0
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.981625 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"619d3aad-c1a1-4d30-ac6f-a0b9535371dc","Type":"ContainerDied","Data":"bf8050866cc839063b75d98e01383637e0e7b6575b76d4460b5d2bb06be62370"}
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.983888 4823 generic.go:334] "Generic (PLEG): container finished" podID="2a901c8f-bafb-433a-8de8-dd20b97f927d" containerID="d3322b6d8c50f9526b2340e0d354f64ad043daccd18de494b444b0edce175e17" exitCode=0
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.983919 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-q27fp" event={"ID":"2a901c8f-bafb-433a-8de8-dd20b97f927d","Type":"ContainerDied","Data":"d3322b6d8c50f9526b2340e0d354f64ad043daccd18de494b444b0edce175e17"}
Jan 21 17:35:09 crc kubenswrapper[4823]: I0121 17:35:09.988542 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2wtt" event={"ID":"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43","Type":"ContainerStarted","Data":"f3f520c718abbcceec97328567be941e3e662a1016ef7a61d7f9ccf7288bb9a3"}
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.083473 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5f55-account-create-update-t6t7d"]
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.084982 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5f55-account-create-update-t6t7d"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.088775 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.102338 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5f55-account-create-update-t6t7d"]
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.125767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d755b5dc-e412-4301-a237-a0228a9378f4-operator-scripts\") pod \"glance-db-create-m2t5j\" (UID: \"d755b5dc-e412-4301-a237-a0228a9378f4\") " pod="openstack/glance-db-create-m2t5j"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.125934 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxrk\" (UniqueName: \"kubernetes.io/projected/d755b5dc-e412-4301-a237-a0228a9378f4-kube-api-access-tvxrk\") pod \"glance-db-create-m2t5j\" (UID: \"d755b5dc-e412-4301-a237-a0228a9378f4\") " pod="openstack/glance-db-create-m2t5j"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.228028 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-operator-scripts\") pod \"glance-5f55-account-create-update-t6t7d\" (UID: \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\") " pod="openstack/glance-5f55-account-create-update-t6t7d"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.228081 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zb7\" (UniqueName: \"kubernetes.io/projected/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-kube-api-access-r9zb7\") pod \"glance-5f55-account-create-update-t6t7d\" (UID: \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\") " pod="openstack/glance-5f55-account-create-update-t6t7d"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.228347 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d755b5dc-e412-4301-a237-a0228a9378f4-operator-scripts\") pod \"glance-db-create-m2t5j\" (UID: \"d755b5dc-e412-4301-a237-a0228a9378f4\") " pod="openstack/glance-db-create-m2t5j"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.228665 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxrk\" (UniqueName: \"kubernetes.io/projected/d755b5dc-e412-4301-a237-a0228a9378f4-kube-api-access-tvxrk\") pod \"glance-db-create-m2t5j\" (UID: \"d755b5dc-e412-4301-a237-a0228a9378f4\") " pod="openstack/glance-db-create-m2t5j"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.229192 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d755b5dc-e412-4301-a237-a0228a9378f4-operator-scripts\") pod \"glance-db-create-m2t5j\" (UID: \"d755b5dc-e412-4301-a237-a0228a9378f4\") " pod="openstack/glance-db-create-m2t5j"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.248457 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxrk\" (UniqueName: \"kubernetes.io/projected/d755b5dc-e412-4301-a237-a0228a9378f4-kube-api-access-tvxrk\") pod \"glance-db-create-m2t5j\" (UID: \"d755b5dc-e412-4301-a237-a0228a9378f4\") " pod="openstack/glance-db-create-m2t5j"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.304256 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m2t5j"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.330783 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-operator-scripts\") pod \"glance-5f55-account-create-update-t6t7d\" (UID: \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\") " pod="openstack/glance-5f55-account-create-update-t6t7d"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.330840 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9zb7\" (UniqueName: \"kubernetes.io/projected/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-kube-api-access-r9zb7\") pod \"glance-5f55-account-create-update-t6t7d\" (UID: \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\") " pod="openstack/glance-5f55-account-create-update-t6t7d"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.332223 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-operator-scripts\") pod \"glance-5f55-account-create-update-t6t7d\" (UID: \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\") " pod="openstack/glance-5f55-account-create-update-t6t7d"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.346830 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9zb7\" (UniqueName: \"kubernetes.io/projected/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-kube-api-access-r9zb7\") pod \"glance-5f55-account-create-update-t6t7d\" (UID: \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\") " pod="openstack/glance-5f55-account-create-update-t6t7d"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.443380 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5f55-account-create-update-t6t7d"
Jan 21 17:35:10 crc kubenswrapper[4823]: I0121 17:35:10.819342 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m2t5j"]
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.003119 4823 generic.go:334] "Generic (PLEG): container finished" podID="644c6573-fd38-482c-8cef-409affff3581" containerID="97ce6be46288dcd33a81e5bb9ff4651f313862b442b113096dc8ef87e8c7f9a1" exitCode=0
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.003221 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-e2a3-account-create-update-mx6xz" event={"ID":"644c6573-fd38-482c-8cef-409affff3581","Type":"ContainerDied","Data":"97ce6be46288dcd33a81e5bb9ff4651f313862b442b113096dc8ef87e8c7f9a1"}
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.006965 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"619d3aad-c1a1-4d30-ac6f-a0b9535371dc","Type":"ContainerStarted","Data":"1ebf3a1bc690a6fb2b3fbf1fd41bc007dc0aa35efc3e7bfe807612fc6f7960d4"}
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.007322 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.009572 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m2t5j" event={"ID":"d755b5dc-e412-4301-a237-a0228a9378f4","Type":"ContainerStarted","Data":"f48579fa15bd083b85c54d78dc01289f44a54c551e70a3f10aee1818862481e1"}
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.013608 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" event={"ID":"00ca24fe-07a5-4b20-a18d-5f62c510030a","Type":"ContainerStarted","Data":"00e4a785cada32a80be87f9e5206df3ac28a9ced6bc0db7f7c58110d900b102f"}
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.014466 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq"
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.020025 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dd8ea30-a041-4ce6-8a36-b8a355b076dc","Type":"ContainerStarted","Data":"37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6"}
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.020372 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 17:35:11 crc kubenswrapper[4823]: W0121 17:35:11.040714 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f3d6f07_c8ed_4bcd_b03c_c1c371f1af1a.slice/crio-439e9160afcf5a6b9d5560259b5cb03d0e88b9896265daf425a16971dea31fc8 WatchSource:0}: Error finding container 439e9160afcf5a6b9d5560259b5cb03d0e88b9896265daf425a16971dea31fc8: Status 404 returned error can't find the container with id 439e9160afcf5a6b9d5560259b5cb03d0e88b9896265daf425a16971dea31fc8
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.047665 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5f55-account-create-update-t6t7d"]
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.050661 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" podStartSLOduration=5.050640567 podStartE2EDuration="5.050640567s" podCreationTimestamp="2026-01-21 17:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:11.040568638 +0000 UTC m=+1111.966699498" watchObservedRunningTime="2026-01-21 17:35:11.050640567 +0000 UTC m=+1111.976771427"
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.078799 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.67132612 podStartE2EDuration="1m1.078779662s" podCreationTimestamp="2026-01-21 17:34:10 +0000 UTC" firstStartedPulling="2026-01-21 17:34:12.446779523 +0000 UTC m=+1053.372910383" lastFinishedPulling="2026-01-21 17:34:31.854233065 +0000 UTC m=+1072.780363925" observedRunningTime="2026-01-21 17:35:11.072572059 +0000 UTC m=+1111.998702919" watchObservedRunningTime="2026-01-21 17:35:11.078779662 +0000 UTC m=+1112.004910522"
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.100703 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.013538387 podStartE2EDuration="1m1.100685223s" podCreationTimestamp="2026-01-21 17:34:10 +0000 UTC" firstStartedPulling="2026-01-21 17:34:12.751462483 +0000 UTC m=+1053.677593343" lastFinishedPulling="2026-01-21 17:34:31.838609319 +0000 UTC m=+1072.764740179" observedRunningTime="2026-01-21 17:35:11.095039584 +0000 UTC m=+1112.021170444" watchObservedRunningTime="2026-01-21 17:35:11.100685223 +0000 UTC m=+1112.026816073"
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.466819 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-q27fp"
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.565047 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a901c8f-bafb-433a-8de8-dd20b97f927d-operator-scripts\") pod \"2a901c8f-bafb-433a-8de8-dd20b97f927d\" (UID: \"2a901c8f-bafb-433a-8de8-dd20b97f927d\") "
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.565120 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xpd7\" (UniqueName: \"kubernetes.io/projected/2a901c8f-bafb-433a-8de8-dd20b97f927d-kube-api-access-5xpd7\") pod \"2a901c8f-bafb-433a-8de8-dd20b97f927d\" (UID: \"2a901c8f-bafb-433a-8de8-dd20b97f927d\") "
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.566192 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a901c8f-bafb-433a-8de8-dd20b97f927d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a901c8f-bafb-433a-8de8-dd20b97f927d" (UID: "2a901c8f-bafb-433a-8de8-dd20b97f927d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.571205 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a901c8f-bafb-433a-8de8-dd20b97f927d-kube-api-access-5xpd7" (OuterVolumeSpecName: "kube-api-access-5xpd7") pod "2a901c8f-bafb-433a-8de8-dd20b97f927d" (UID: "2a901c8f-bafb-433a-8de8-dd20b97f927d"). InnerVolumeSpecName "kube-api-access-5xpd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.667494 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a901c8f-bafb-433a-8de8-dd20b97f927d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.667549 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xpd7\" (UniqueName: \"kubernetes.io/projected/2a901c8f-bafb-433a-8de8-dd20b97f927d-kube-api-access-5xpd7\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:11 crc kubenswrapper[4823]: I0121 17:35:11.871479 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:11 crc kubenswrapper[4823]: E0121 17:35:11.871662 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 17:35:11 crc kubenswrapper[4823]: E0121 17:35:11.871688 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 17:35:11 crc kubenswrapper[4823]: E0121 17:35:11.871744 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift podName:1146f69b-d935-4a56-9f65-e96bf9539c14 nodeName:}" failed. No retries permitted until 2026-01-21 17:35:15.871728091 +0000 UTC m=+1116.797858951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift") pod "swift-storage-0" (UID: "1146f69b-d935-4a56-9f65-e96bf9539c14") : configmap "swift-ring-files" not found
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.030820 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-q27fp" event={"ID":"2a901c8f-bafb-433a-8de8-dd20b97f927d","Type":"ContainerDied","Data":"07c32468d16b7431a9da2aad0e6d101ea915e4e759848e9abe6df93260ac69b5"}
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.031177 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c32468d16b7431a9da2aad0e6d101ea915e4e759848e9abe6df93260ac69b5"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.030842 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-q27fp"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.032074 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5f55-account-create-update-t6t7d" event={"ID":"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a","Type":"ContainerStarted","Data":"439e9160afcf5a6b9d5560259b5cb03d0e88b9896265daf425a16971dea31fc8"}
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.088499 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9wtbv"]
Jan 21 17:35:12 crc kubenswrapper[4823]: E0121 17:35:12.088994 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a901c8f-bafb-433a-8de8-dd20b97f927d" containerName="mariadb-database-create"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.089016 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a901c8f-bafb-433a-8de8-dd20b97f927d" containerName="mariadb-database-create"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.089234 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a901c8f-bafb-433a-8de8-dd20b97f927d" containerName="mariadb-database-create"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.089984 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9wtbv"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.093255 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.113985 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9wtbv"]
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.185506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkbbx\" (UniqueName: \"kubernetes.io/projected/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-kube-api-access-tkbbx\") pod \"root-account-create-update-9wtbv\" (UID: \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\") " pod="openstack/root-account-create-update-9wtbv"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.185617 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-operator-scripts\") pod \"root-account-create-update-9wtbv\" (UID: \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\") " pod="openstack/root-account-create-update-9wtbv"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.287529 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkbbx\" (UniqueName: \"kubernetes.io/projected/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-kube-api-access-tkbbx\") pod \"root-account-create-update-9wtbv\" (UID: \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\") " pod="openstack/root-account-create-update-9wtbv"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.287629 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-operator-scripts\") pod \"root-account-create-update-9wtbv\" (UID: \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\") " pod="openstack/root-account-create-update-9wtbv"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.288548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-operator-scripts\") pod \"root-account-create-update-9wtbv\" (UID: \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\") " pod="openstack/root-account-create-update-9wtbv"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.321507 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkbbx\" (UniqueName: \"kubernetes.io/projected/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-kube-api-access-tkbbx\") pod \"root-account-create-update-9wtbv\" (UID: \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\") " pod="openstack/root-account-create-update-9wtbv"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.383955 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-e2a3-account-create-update-mx6xz"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.408339 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9wtbv"
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.495642 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644c6573-fd38-482c-8cef-409affff3581-operator-scripts\") pod \"644c6573-fd38-482c-8cef-409affff3581\" (UID: \"644c6573-fd38-482c-8cef-409affff3581\") "
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.496365 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr24s\" (UniqueName: \"kubernetes.io/projected/644c6573-fd38-482c-8cef-409affff3581-kube-api-access-fr24s\") pod \"644c6573-fd38-482c-8cef-409affff3581\" (UID: \"644c6573-fd38-482c-8cef-409affff3581\") "
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.498549 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644c6573-fd38-482c-8cef-409affff3581-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "644c6573-fd38-482c-8cef-409affff3581" (UID: "644c6573-fd38-482c-8cef-409affff3581"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.503194 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644c6573-fd38-482c-8cef-409affff3581-kube-api-access-fr24s" (OuterVolumeSpecName: "kube-api-access-fr24s") pod "644c6573-fd38-482c-8cef-409affff3581" (UID: "644c6573-fd38-482c-8cef-409affff3581"). InnerVolumeSpecName "kube-api-access-fr24s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.598583 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr24s\" (UniqueName: \"kubernetes.io/projected/644c6573-fd38-482c-8cef-409affff3581-kube-api-access-fr24s\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.598622 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644c6573-fd38-482c-8cef-409affff3581-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:12 crc kubenswrapper[4823]: I0121 17:35:12.747342 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9wtbv"]
Jan 21 17:35:13 crc kubenswrapper[4823]: I0121 17:35:13.048692 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m2t5j" event={"ID":"d755b5dc-e412-4301-a237-a0228a9378f4","Type":"ContainerStarted","Data":"8ddd086bba06196dc787b5a909e279cd771c54066edc3ab82d746a642af13bb7"}
Jan 21 17:35:13 crc kubenswrapper[4823]: I0121 17:35:13.063162 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5f55-account-create-update-t6t7d" event={"ID":"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a","Type":"ContainerStarted","Data":"29bb2602b9fda9898bf18f9ed69381733427a046c385cfd662b682bde67c5ef3"}
Jan 21 17:35:13 crc kubenswrapper[4823]: I0121 17:35:13.071772 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-e2a3-account-create-update-mx6xz"
Jan 21 17:35:13 crc kubenswrapper[4823]: I0121 17:35:13.071979 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-e2a3-account-create-update-mx6xz" event={"ID":"644c6573-fd38-482c-8cef-409affff3581","Type":"ContainerDied","Data":"ba75b1dbd7cdbf3a4b6d956ff0495e966d798e57617272d56ef705436393fdf0"}
Jan 21 17:35:13 crc kubenswrapper[4823]: I0121 17:35:13.072156 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba75b1dbd7cdbf3a4b6d956ff0495e966d798e57617272d56ef705436393fdf0"
Jan 21 17:35:13 crc kubenswrapper[4823]: I0121 17:35:13.077724 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-m2t5j" podStartSLOduration=4.077695183 podStartE2EDuration="4.077695183s" podCreationTimestamp="2026-01-21 17:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:13.067981273 +0000 UTC m=+1113.994112133" watchObservedRunningTime="2026-01-21 17:35:13.077695183 +0000 UTC m=+1114.003826043"
Jan 21 17:35:13 crc kubenswrapper[4823]: I0121 17:35:13.102590 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-5f55-account-create-update-t6t7d" podStartSLOduration=3.102565117 podStartE2EDuration="3.102565117s" podCreationTimestamp="2026-01-21 17:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:13.094682843 +0000 UTC m=+1114.020813713" watchObservedRunningTime="2026-01-21 17:35:13.102565117 +0000 UTC m=+1114.028695987"
Jan 21 17:35:14 crc kubenswrapper[4823]: I0121 17:35:14.087404 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 21 17:35:14 crc kubenswrapper[4823]: I0121 17:35:14.088321 4823 generic.go:334] "Generic (PLEG): container finished" podID="d755b5dc-e412-4301-a237-a0228a9378f4" containerID="8ddd086bba06196dc787b5a909e279cd771c54066edc3ab82d746a642af13bb7" exitCode=0
Jan 21 17:35:14 crc kubenswrapper[4823]: I0121 17:35:14.088380 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m2t5j" event={"ID":"d755b5dc-e412-4301-a237-a0228a9378f4","Type":"ContainerDied","Data":"8ddd086bba06196dc787b5a909e279cd771c54066edc3ab82d746a642af13bb7"}
Jan 21 17:35:14 crc kubenswrapper[4823]: I0121 17:35:14.091578 4823 generic.go:334] "Generic (PLEG): container finished" podID="1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a" containerID="29bb2602b9fda9898bf18f9ed69381733427a046c385cfd662b682bde67c5ef3" exitCode=0
Jan 21 17:35:14 crc kubenswrapper[4823]: I0121 17:35:14.091624 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5f55-account-create-update-t6t7d" event={"ID":"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a","Type":"ContainerDied","Data":"29bb2602b9fda9898bf18f9ed69381733427a046c385cfd662b682bde67c5ef3"}
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.070311 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.070398 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.803026 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m2t5j"
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.869586 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d755b5dc-e412-4301-a237-a0228a9378f4-operator-scripts\") pod \"d755b5dc-e412-4301-a237-a0228a9378f4\" (UID: \"d755b5dc-e412-4301-a237-a0228a9378f4\") "
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.869674 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvxrk\" (UniqueName: \"kubernetes.io/projected/d755b5dc-e412-4301-a237-a0228a9378f4-kube-api-access-tvxrk\") pod \"d755b5dc-e412-4301-a237-a0228a9378f4\" (UID: \"d755b5dc-e412-4301-a237-a0228a9378f4\") "
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.871605 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d755b5dc-e412-4301-a237-a0228a9378f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d755b5dc-e412-4301-a237-a0228a9378f4" (UID: "d755b5dc-e412-4301-a237-a0228a9378f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.876662 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d755b5dc-e412-4301-a237-a0228a9378f4-kube-api-access-tvxrk" (OuterVolumeSpecName: "kube-api-access-tvxrk") pod "d755b5dc-e412-4301-a237-a0228a9378f4" (UID: "d755b5dc-e412-4301-a237-a0228a9378f4"). InnerVolumeSpecName "kube-api-access-tvxrk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.894269 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5f55-account-create-update-t6t7d"
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.971286 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.971527 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d755b5dc-e412-4301-a237-a0228a9378f4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:15 crc kubenswrapper[4823]: I0121 17:35:15.971548 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvxrk\" (UniqueName: \"kubernetes.io/projected/d755b5dc-e412-4301-a237-a0228a9378f4-kube-api-access-tvxrk\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:15 crc kubenswrapper[4823]: E0121 17:35:15.971697 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 17:35:15 crc kubenswrapper[4823]: E0121 17:35:15.971723 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 17:35:15 crc kubenswrapper[4823]: E0121 17:35:15.971766 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift podName:1146f69b-d935-4a56-9f65-e96bf9539c14 nodeName:}" failed. No retries permitted until 2026-01-21 17:35:23.971749257 +0000 UTC m=+1124.897880107 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift") pod "swift-storage-0" (UID: "1146f69b-d935-4a56-9f65-e96bf9539c14") : configmap "swift-ring-files" not found
Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.073429 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9zb7\" (UniqueName: \"kubernetes.io/projected/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-kube-api-access-r9zb7\") pod \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\" (UID: \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\") "
Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.073504 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-operator-scripts\") pod \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\" (UID: \"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a\") "
Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.074512 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a" (UID: "1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a"). InnerVolumeSpecName "operator-scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.078698 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-kube-api-access-r9zb7" (OuterVolumeSpecName: "kube-api-access-r9zb7") pod "1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a" (UID: "1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a"). InnerVolumeSpecName "kube-api-access-r9zb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.126282 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2wtt" event={"ID":"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43","Type":"ContainerStarted","Data":"c755567505367d945d19a184e6429b8f8c910b8cb650a1d094a870dce26c74a0"} Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.128205 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9wtbv" event={"ID":"268d9f8b-8ddf-41a8-8d84-9d948ba3609b","Type":"ContainerStarted","Data":"a8dde4e645aceeb5df0f82e153a8eb1cdc085fce6d6a09783197107af6a51f96"} Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.128248 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9wtbv" event={"ID":"268d9f8b-8ddf-41a8-8d84-9d948ba3609b","Type":"ContainerStarted","Data":"b227f8a35f9cbf39f99334c0920bd28f38346c08c61142c150a15bf428f7fdc0"} Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.131181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m2t5j" event={"ID":"d755b5dc-e412-4301-a237-a0228a9378f4","Type":"ContainerDied","Data":"f48579fa15bd083b85c54d78dc01289f44a54c551e70a3f10aee1818862481e1"} Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.131405 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f48579fa15bd083b85c54d78dc01289f44a54c551e70a3f10aee1818862481e1" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.131206 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m2t5j" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.133293 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5f55-account-create-update-t6t7d" event={"ID":"1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a","Type":"ContainerDied","Data":"439e9160afcf5a6b9d5560259b5cb03d0e88b9896265daf425a16971dea31fc8"} Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.133322 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="439e9160afcf5a6b9d5560259b5cb03d0e88b9896265daf425a16971dea31fc8" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.133348 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5f55-account-create-update-t6t7d" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.150610 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-w2wtt" podStartSLOduration=2.043379905 podStartE2EDuration="8.150580415s" podCreationTimestamp="2026-01-21 17:35:08 +0000 UTC" firstStartedPulling="2026-01-21 17:35:09.453526103 +0000 UTC m=+1110.379656963" lastFinishedPulling="2026-01-21 17:35:15.560726613 +0000 UTC m=+1116.486857473" observedRunningTime="2026-01-21 17:35:16.146206306 +0000 UTC m=+1117.072337176" watchObservedRunningTime="2026-01-21 17:35:16.150580415 +0000 UTC m=+1117.076711275" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.176228 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9zb7\" (UniqueName: \"kubernetes.io/projected/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-kube-api-access-r9zb7\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.176269 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.184816 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-9wtbv" podStartSLOduration=4.18479679 podStartE2EDuration="4.18479679s" podCreationTimestamp="2026-01-21 17:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:16.175349416 +0000 UTC m=+1117.101480276" watchObservedRunningTime="2026-01-21 17:35:16.18479679 +0000 UTC m=+1117.110927650" Jan 21 17:35:16 crc kubenswrapper[4823]: I0121 17:35:16.891241 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.141040 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.147947 4823 generic.go:334] "Generic (PLEG): container finished" podID="268d9f8b-8ddf-41a8-8d84-9d948ba3609b" containerID="a8dde4e645aceeb5df0f82e153a8eb1cdc085fce6d6a09783197107af6a51f96" exitCode=0 Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.149063 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9wtbv" event={"ID":"268d9f8b-8ddf-41a8-8d84-9d948ba3609b","Type":"ContainerDied","Data":"a8dde4e645aceeb5df0f82e153a8eb1cdc085fce6d6a09783197107af6a51f96"} Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.231655 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-r4c8w"] Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.231997 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-r4c8w" podUID="78df6398-7de0-4017-ac89-70c5fd48130d" containerName="dnsmasq-dns" containerID="cri-o://69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e" gracePeriod=10 Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.739768 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.847846 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mhx5\" (UniqueName: \"kubernetes.io/projected/78df6398-7de0-4017-ac89-70c5fd48130d-kube-api-access-6mhx5\") pod \"78df6398-7de0-4017-ac89-70c5fd48130d\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.847992 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-config\") pod \"78df6398-7de0-4017-ac89-70c5fd48130d\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.848116 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-nb\") pod \"78df6398-7de0-4017-ac89-70c5fd48130d\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.848148 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-sb\") pod \"78df6398-7de0-4017-ac89-70c5fd48130d\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.848182 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-dns-svc\") pod \"78df6398-7de0-4017-ac89-70c5fd48130d\" (UID: \"78df6398-7de0-4017-ac89-70c5fd48130d\") " Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.867503 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78df6398-7de0-4017-ac89-70c5fd48130d-kube-api-access-6mhx5" (OuterVolumeSpecName: "kube-api-access-6mhx5") pod "78df6398-7de0-4017-ac89-70c5fd48130d" (UID: "78df6398-7de0-4017-ac89-70c5fd48130d"). InnerVolumeSpecName "kube-api-access-6mhx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.896142 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78df6398-7de0-4017-ac89-70c5fd48130d" (UID: "78df6398-7de0-4017-ac89-70c5fd48130d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.902528 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78df6398-7de0-4017-ac89-70c5fd48130d" (UID: "78df6398-7de0-4017-ac89-70c5fd48130d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.914485 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-config" (OuterVolumeSpecName: "config") pod "78df6398-7de0-4017-ac89-70c5fd48130d" (UID: "78df6398-7de0-4017-ac89-70c5fd48130d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.936676 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78df6398-7de0-4017-ac89-70c5fd48130d" (UID: "78df6398-7de0-4017-ac89-70c5fd48130d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.951375 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mhx5\" (UniqueName: \"kubernetes.io/projected/78df6398-7de0-4017-ac89-70c5fd48130d-kube-api-access-6mhx5\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.951422 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.951438 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.951451 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:17 crc kubenswrapper[4823]: I0121 17:35:17.951462 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78df6398-7de0-4017-ac89-70c5fd48130d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.160742 4823 generic.go:334] "Generic (PLEG): container finished" podID="78df6398-7de0-4017-ac89-70c5fd48130d" containerID="69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e" exitCode=0 Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.161015 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-r4c8w" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.161698 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-r4c8w" event={"ID":"78df6398-7de0-4017-ac89-70c5fd48130d","Type":"ContainerDied","Data":"69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e"} Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.161750 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-r4c8w" event={"ID":"78df6398-7de0-4017-ac89-70c5fd48130d","Type":"ContainerDied","Data":"2f39ed4e6c0e2a497a6f5250bdbb864a817c56782c0789cbab211914608ca8fe"} Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.161771 4823 scope.go:117] "RemoveContainer" containerID="69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.208500 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-r4c8w"] Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.215590 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-r4c8w"] Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.251626 4823 scope.go:117] "RemoveContainer" containerID="d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.285296 4823 scope.go:117] "RemoveContainer" containerID="69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e" Jan 21 17:35:18 crc kubenswrapper[4823]: E0121 17:35:18.287672 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e\": container with ID starting with 69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e not found: ID does not exist" containerID="69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.287733 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e"} err="failed to get container status \"69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e\": rpc error: code = NotFound desc = could not find container \"69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e\": container with ID starting with 69a2abba29d7aeccfe5329043d8de1f524afc059827b57141ae634d6c6ce902e not found: ID does not exist" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.287760 4823 scope.go:117] "RemoveContainer" containerID="d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447" Jan 21 17:35:18 crc kubenswrapper[4823]: E0121 17:35:18.294418 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447\": container with ID starting with d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447 not found: ID does not exist" containerID="d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.294485 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447"} err="failed to get container status 
\"d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447\": rpc error: code = NotFound desc = could not find container \"d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447\": container with ID starting with d948dab211d1f11ec353d13bf2f695ddfe080c5a79f92020ce43801eb68c5447 not found: ID does not exist" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.499618 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9wtbv" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.563079 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkbbx\" (UniqueName: \"kubernetes.io/projected/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-kube-api-access-tkbbx\") pod \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\" (UID: \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\") " Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.563186 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-operator-scripts\") pod \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\" (UID: \"268d9f8b-8ddf-41a8-8d84-9d948ba3609b\") " Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.563961 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "268d9f8b-8ddf-41a8-8d84-9d948ba3609b" (UID: "268d9f8b-8ddf-41a8-8d84-9d948ba3609b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.569825 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-kube-api-access-tkbbx" (OuterVolumeSpecName: "kube-api-access-tkbbx") pod "268d9f8b-8ddf-41a8-8d84-9d948ba3609b" (UID: "268d9f8b-8ddf-41a8-8d84-9d948ba3609b"). InnerVolumeSpecName "kube-api-access-tkbbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.665628 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:18 crc kubenswrapper[4823]: I0121 17:35:18.665680 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkbbx\" (UniqueName: \"kubernetes.io/projected/268d9f8b-8ddf-41a8-8d84-9d948ba3609b-kube-api-access-tkbbx\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:19 crc kubenswrapper[4823]: I0121 17:35:19.176456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9wtbv" event={"ID":"268d9f8b-8ddf-41a8-8d84-9d948ba3609b","Type":"ContainerDied","Data":"b227f8a35f9cbf39f99334c0920bd28f38346c08c61142c150a15bf428f7fdc0"} Jan 21 17:35:19 crc kubenswrapper[4823]: I0121 17:35:19.176507 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b227f8a35f9cbf39f99334c0920bd28f38346c08c61142c150a15bf428f7fdc0" Jan 21 17:35:19 crc kubenswrapper[4823]: I0121 17:35:19.176593 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9wtbv" Jan 21 17:35:19 crc kubenswrapper[4823]: I0121 17:35:19.360435 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78df6398-7de0-4017-ac89-70c5fd48130d" path="/var/lib/kubelet/pods/78df6398-7de0-4017-ac89-70c5fd48130d/volumes" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.161048 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qk9xv"] Jan 21 17:35:20 crc kubenswrapper[4823]: E0121 17:35:20.161788 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78df6398-7de0-4017-ac89-70c5fd48130d" containerName="init" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.161806 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="78df6398-7de0-4017-ac89-70c5fd48130d" containerName="init" Jan 21 17:35:20 crc kubenswrapper[4823]: E0121 17:35:20.161826 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a" containerName="mariadb-account-create-update" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.161834 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a" containerName="mariadb-account-create-update" Jan 21 17:35:20 crc kubenswrapper[4823]: E0121 17:35:20.161878 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78df6398-7de0-4017-ac89-70c5fd48130d" containerName="dnsmasq-dns" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.161888 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="78df6398-7de0-4017-ac89-70c5fd48130d" containerName="dnsmasq-dns" Jan 21 17:35:20 crc kubenswrapper[4823]: E0121 17:35:20.161912 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644c6573-fd38-482c-8cef-409affff3581" containerName="mariadb-account-create-update" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.161921 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="644c6573-fd38-482c-8cef-409affff3581" containerName="mariadb-account-create-update" Jan 21 17:35:20 crc kubenswrapper[4823]: E0121 17:35:20.161933 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268d9f8b-8ddf-41a8-8d84-9d948ba3609b" containerName="mariadb-account-create-update" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.161940 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="268d9f8b-8ddf-41a8-8d84-9d948ba3609b" containerName="mariadb-account-create-update" Jan 21 17:35:20 crc kubenswrapper[4823]: E0121 17:35:20.161955 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d755b5dc-e412-4301-a237-a0228a9378f4" containerName="mariadb-database-create" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.161962 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d755b5dc-e412-4301-a237-a0228a9378f4" containerName="mariadb-database-create" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.162136 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="268d9f8b-8ddf-41a8-8d84-9d948ba3609b" containerName="mariadb-account-create-update" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.162155 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d755b5dc-e412-4301-a237-a0228a9378f4" containerName="mariadb-database-create" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.162169 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="78df6398-7de0-4017-ac89-70c5fd48130d" 
containerName="dnsmasq-dns" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.162184 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="644c6573-fd38-482c-8cef-409affff3581" containerName="mariadb-account-create-update" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.162193 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a" containerName="mariadb-account-create-update" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.162917 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.165241 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x59xs" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.166295 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.181193 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qk9xv"] Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.206666 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmclg\" (UniqueName: \"kubernetes.io/projected/8285d29e-4f51-4b0c-8dd9-c613317c933c-kube-api-access-tmclg\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.206808 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-combined-ca-bundle\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.206837 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-db-sync-config-data\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.206885 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-config-data\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.308392 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-combined-ca-bundle\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.308445 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-db-sync-config-data\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.308465 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-config-data\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.308539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmclg\" (UniqueName: \"kubernetes.io/projected/8285d29e-4f51-4b0c-8dd9-c613317c933c-kube-api-access-tmclg\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.315097 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-config-data\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.315302 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-db-sync-config-data\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.315427 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-combined-ca-bundle\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.332962 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmclg\" (UniqueName: \"kubernetes.io/projected/8285d29e-4f51-4b0c-8dd9-c613317c933c-kube-api-access-tmclg\") pod \"glance-db-sync-qk9xv\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:20 crc kubenswrapper[4823]: I0121 17:35:20.481453 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qk9xv" Jan 21 17:35:21 crc kubenswrapper[4823]: I0121 17:35:21.098272 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qk9xv"] Jan 21 17:35:21 crc kubenswrapper[4823]: I0121 17:35:21.216914 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qk9xv" event={"ID":"8285d29e-4f51-4b0c-8dd9-c613317c933c","Type":"ContainerStarted","Data":"0ef60a15417af862f335a7ae3f83812177f6cf066e7d5cce9d684a5f870934a7"} Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.113090 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.230189 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerStarted","Data":"65c57cc2faa2053e3831057252afd0bd6a79afe4474e02a62ed8e48d75eeb9bb"} Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.368149 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.544390 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mhwlz"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.546155 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mhwlz" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.560660 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mhwlz"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.598500 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-jd72h"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.606050 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.609767 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.615049 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-ft6vh" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.647087 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-jd72h"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.659417 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-operator-scripts\") pod \"cinder-db-create-mhwlz\" (UID: \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\") " pod="openstack/cinder-db-create-mhwlz" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.659590 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj9mq\" (UniqueName: \"kubernetes.io/projected/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-kube-api-access-qj9mq\") pod \"cinder-db-create-mhwlz\" (UID: \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\") " pod="openstack/cinder-db-create-mhwlz" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.678119 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-pmgrr"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.679641 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pmgrr" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.693278 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pmgrr"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.763171 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-db-sync-config-data\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.764687 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-combined-ca-bundle\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.764904 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-config-data\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.764969 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj9mq\" (UniqueName: \"kubernetes.io/projected/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-kube-api-access-qj9mq\") pod \"cinder-db-create-mhwlz\" (UID: \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\") " pod="openstack/cinder-db-create-mhwlz" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.765134 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlfnr\" (UniqueName: \"kubernetes.io/projected/0d800059-c35e-403a-a930-f1db60cf5c75-kube-api-access-xlfnr\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.765332 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnc5v\" (UniqueName: \"kubernetes.io/projected/e8dc75f0-c0f1-456e-912b-221ee7b6697c-kube-api-access-jnc5v\") pod \"barbican-db-create-pmgrr\" (UID: \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\") " pod="openstack/barbican-db-create-pmgrr" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.765415 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-operator-scripts\") pod \"cinder-db-create-mhwlz\" (UID: \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\") " pod="openstack/cinder-db-create-mhwlz" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.765454 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8dc75f0-c0f1-456e-912b-221ee7b6697c-operator-scripts\") pod \"barbican-db-create-pmgrr\" (UID: \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\") " pod="openstack/barbican-db-create-pmgrr" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.767023 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-operator-scripts\") pod \"cinder-db-create-mhwlz\" (UID: \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\") " pod="openstack/cinder-db-create-mhwlz" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.800163 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x49h2"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.812282 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj9mq\" (UniqueName: \"kubernetes.io/projected/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-kube-api-access-qj9mq\") pod \"cinder-db-create-mhwlz\" (UID: \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\") " pod="openstack/cinder-db-create-mhwlz" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.820086 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x49h2" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.840867 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x49h2"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.843481 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jlj99" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.843692 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.844175 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.869111 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlfnr\" (UniqueName: \"kubernetes.io/projected/0d800059-c35e-403a-a930-f1db60cf5c75-kube-api-access-xlfnr\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.869204 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.869218 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnc5v\" (UniqueName: \"kubernetes.io/projected/e8dc75f0-c0f1-456e-912b-221ee7b6697c-kube-api-access-jnc5v\") pod \"barbican-db-create-pmgrr\" (UID: \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\") " pod="openstack/barbican-db-create-pmgrr" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.869536 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8dc75f0-c0f1-456e-912b-221ee7b6697c-operator-scripts\") pod \"barbican-db-create-pmgrr\" (UID: \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\") " pod="openstack/barbican-db-create-pmgrr" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.869578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-db-sync-config-data\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.869595 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-combined-ca-bundle\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.869640 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-config-data\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.876466 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mhwlz" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.878082 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8dc75f0-c0f1-456e-912b-221ee7b6697c-operator-scripts\") pod \"barbican-db-create-pmgrr\" (UID: \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\") " pod="openstack/barbican-db-create-pmgrr" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.883669 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7899-account-create-update-wzjrc"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.893756 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-config-data\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.897776 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-db-sync-config-data\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.899917 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7899-account-create-update-wzjrc" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.908683 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.915615 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-combined-ca-bundle\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.941937 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlfnr\" (UniqueName: \"kubernetes.io/projected/0d800059-c35e-403a-a930-f1db60cf5c75-kube-api-access-xlfnr\") pod \"watcher-db-sync-jd72h\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " pod="openstack/watcher-db-sync-jd72h" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.952654 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7899-account-create-update-wzjrc"] Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.952724 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnc5v\" (UniqueName: \"kubernetes.io/projected/e8dc75f0-c0f1-456e-912b-221ee7b6697c-kube-api-access-jnc5v\") pod \"barbican-db-create-pmgrr\" (UID: \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\") " pod="openstack/barbican-db-create-pmgrr" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.980589 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-combined-ca-bundle\") pod \"keystone-db-sync-x49h2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " pod="openstack/keystone-db-sync-x49h2" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.980680 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-config-data\") pod \"keystone-db-sync-x49h2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " pod="openstack/keystone-db-sync-x49h2" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.980750 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxzj\" (UniqueName: \"kubernetes.io/projected/dfacc23e-26c3-4308-ab42-371932e13246-kube-api-access-mwxzj\") pod \"barbican-7899-account-create-update-wzjrc\" (UID: \"dfacc23e-26c3-4308-ab42-371932e13246\") " pod="openstack/barbican-7899-account-create-update-wzjrc" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.980787 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfacc23e-26c3-4308-ab42-371932e13246-operator-scripts\") pod \"barbican-7899-account-create-update-wzjrc\" (UID: \"dfacc23e-26c3-4308-ab42-371932e13246\") " pod="openstack/barbican-7899-account-create-update-wzjrc" Jan 21 17:35:22 crc kubenswrapper[4823]: I0121 17:35:22.980813 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8fk\" (UniqueName: \"kubernetes.io/projected/00e6120a-8367-4154-81be-2f80bc318ac2-kube-api-access-lr8fk\") pod \"keystone-db-sync-x49h2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " pod="openstack/keystone-db-sync-x49h2" Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.011405 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pmgrr" Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.019684 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fv9fn"] Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.021647 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fv9fn" Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.041306 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-504c-account-create-update-xlrsl"] Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.044063 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.058577 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.071966 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fv9fn"]
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.082580 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxzj\" (UniqueName: \"kubernetes.io/projected/dfacc23e-26c3-4308-ab42-371932e13246-kube-api-access-mwxzj\") pod \"barbican-7899-account-create-update-wzjrc\" (UID: \"dfacc23e-26c3-4308-ab42-371932e13246\") " pod="openstack/barbican-7899-account-create-update-wzjrc"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.082637 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfacc23e-26c3-4308-ab42-371932e13246-operator-scripts\") pod \"barbican-7899-account-create-update-wzjrc\" (UID: \"dfacc23e-26c3-4308-ab42-371932e13246\") " pod="openstack/barbican-7899-account-create-update-wzjrc"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.082697 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8fk\" (UniqueName: \"kubernetes.io/projected/00e6120a-8367-4154-81be-2f80bc318ac2-kube-api-access-lr8fk\") pod \"keystone-db-sync-x49h2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " pod="openstack/keystone-db-sync-x49h2"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.083647 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-combined-ca-bundle\") pod \"keystone-db-sync-x49h2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " pod="openstack/keystone-db-sync-x49h2"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.083722 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-config-data\") pod \"keystone-db-sync-x49h2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " pod="openstack/keystone-db-sync-x49h2"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.085869 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfacc23e-26c3-4308-ab42-371932e13246-operator-scripts\") pod \"barbican-7899-account-create-update-wzjrc\" (UID: \"dfacc23e-26c3-4308-ab42-371932e13246\") " pod="openstack/barbican-7899-account-create-update-wzjrc"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.090152 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-config-data\") pod \"keystone-db-sync-x49h2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " pod="openstack/keystone-db-sync-x49h2"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.099818 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-combined-ca-bundle\") pod \"keystone-db-sync-x49h2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " pod="openstack/keystone-db-sync-x49h2"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.108243 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-504c-account-create-update-xlrsl"]
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.124570 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxzj\" (UniqueName: \"kubernetes.io/projected/dfacc23e-26c3-4308-ab42-371932e13246-kube-api-access-mwxzj\") pod \"barbican-7899-account-create-update-wzjrc\" (UID: \"dfacc23e-26c3-4308-ab42-371932e13246\") " pod="openstack/barbican-7899-account-create-update-wzjrc"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.156483 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8fk\" (UniqueName: \"kubernetes.io/projected/00e6120a-8367-4154-81be-2f80bc318ac2-kube-api-access-lr8fk\") pod \"keystone-db-sync-x49h2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " pod="openstack/keystone-db-sync-x49h2"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.161726 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x49h2"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.186511 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6562d200-2356-4f66-905c-27ac16f6bd68-operator-scripts\") pod \"cinder-504c-account-create-update-xlrsl\" (UID: \"6562d200-2356-4f66-905c-27ac16f6bd68\") " pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.186940 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp955\" (UniqueName: \"kubernetes.io/projected/6562d200-2356-4f66-905c-27ac16f6bd68-kube-api-access-qp955\") pod \"cinder-504c-account-create-update-xlrsl\" (UID: \"6562d200-2356-4f66-905c-27ac16f6bd68\") " pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.186990 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7wln\" (UniqueName: \"kubernetes.io/projected/20dae312-edd5-4ad7-92ca-6c61465c9e5a-kube-api-access-s7wln\") pod \"neutron-db-create-fv9fn\" (UID: \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\") " pod="openstack/neutron-db-create-fv9fn"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.187073 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20dae312-edd5-4ad7-92ca-6c61465c9e5a-operator-scripts\") pod \"neutron-db-create-fv9fn\" (UID: \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\") " pod="openstack/neutron-db-create-fv9fn"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.245471 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-jd72h"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.289313 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp955\" (UniqueName: \"kubernetes.io/projected/6562d200-2356-4f66-905c-27ac16f6bd68-kube-api-access-qp955\") pod \"cinder-504c-account-create-update-xlrsl\" (UID: \"6562d200-2356-4f66-905c-27ac16f6bd68\") " pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.289388 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7wln\" (UniqueName: \"kubernetes.io/projected/20dae312-edd5-4ad7-92ca-6c61465c9e5a-kube-api-access-s7wln\") pod \"neutron-db-create-fv9fn\" (UID: \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\") " pod="openstack/neutron-db-create-fv9fn"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.289497 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20dae312-edd5-4ad7-92ca-6c61465c9e5a-operator-scripts\") pod \"neutron-db-create-fv9fn\" (UID: \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\") " pod="openstack/neutron-db-create-fv9fn"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.289622 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6562d200-2356-4f66-905c-27ac16f6bd68-operator-scripts\") pod \"cinder-504c-account-create-update-xlrsl\" (UID: \"6562d200-2356-4f66-905c-27ac16f6bd68\") " pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.290574 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6562d200-2356-4f66-905c-27ac16f6bd68-operator-scripts\") pod \"cinder-504c-account-create-update-xlrsl\" (UID: \"6562d200-2356-4f66-905c-27ac16f6bd68\") " pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.291721 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20dae312-edd5-4ad7-92ca-6c61465c9e5a-operator-scripts\") pod \"neutron-db-create-fv9fn\" (UID: \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\") " pod="openstack/neutron-db-create-fv9fn"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.329560 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7899-account-create-update-wzjrc"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.366928 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7wln\" (UniqueName: \"kubernetes.io/projected/20dae312-edd5-4ad7-92ca-6c61465c9e5a-kube-api-access-s7wln\") pod \"neutron-db-create-fv9fn\" (UID: \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\") " pod="openstack/neutron-db-create-fv9fn"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.368371 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fv9fn"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.376837 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp955\" (UniqueName: \"kubernetes.io/projected/6562d200-2356-4f66-905c-27ac16f6bd68-kube-api-access-qp955\") pod \"cinder-504c-account-create-update-xlrsl\" (UID: \"6562d200-2356-4f66-905c-27ac16f6bd68\") " pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.379044 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.384634 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1561-account-create-update-lmxjf"]
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.386082 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.393820 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1561-account-create-update-lmxjf"]
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.396521 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.496891 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ece2f30-4be3-4661-a2c8-bb5415023a2d-operator-scripts\") pod \"neutron-1561-account-create-update-lmxjf\" (UID: \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\") " pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.496969 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z6s8\" (UniqueName: \"kubernetes.io/projected/6ece2f30-4be3-4661-a2c8-bb5415023a2d-kube-api-access-9z6s8\") pod \"neutron-1561-account-create-update-lmxjf\" (UID: \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\") " pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.582413 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9wtbv"]
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.594560 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9wtbv"]
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.598969 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ece2f30-4be3-4661-a2c8-bb5415023a2d-operator-scripts\") pod \"neutron-1561-account-create-update-lmxjf\" (UID: \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\") " pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.599069 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z6s8\" (UniqueName: \"kubernetes.io/projected/6ece2f30-4be3-4661-a2c8-bb5415023a2d-kube-api-access-9z6s8\") pod \"neutron-1561-account-create-update-lmxjf\" (UID: \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\") " pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.600529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ece2f30-4be3-4661-a2c8-bb5415023a2d-operator-scripts\") pod \"neutron-1561-account-create-update-lmxjf\" (UID: \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\") " pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.629989 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pmgrr"]
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.633538 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z6s8\" (UniqueName: \"kubernetes.io/projected/6ece2f30-4be3-4661-a2c8-bb5415023a2d-kube-api-access-9z6s8\") pod \"neutron-1561-account-create-update-lmxjf\" (UID: \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\") " pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.728091 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:23 crc kubenswrapper[4823]: I0121 17:35:23.857326 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mhwlz"]
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.002175 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x49h2"]
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.013795 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0"
Jan 21 17:35:24 crc kubenswrapper[4823]: E0121 17:35:24.013996 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 17:35:24 crc kubenswrapper[4823]: E0121 17:35:24.014032 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 17:35:24 crc kubenswrapper[4823]: E0121 17:35:24.014095 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift podName:1146f69b-d935-4a56-9f65-e96bf9539c14 nodeName:}" failed. No retries permitted until 2026-01-21 17:35:40.014077132 +0000 UTC m=+1140.940207992 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift") pod "swift-storage-0" (UID: "1146f69b-d935-4a56-9f65-e96bf9539c14") : configmap "swift-ring-files" not found
Jan 21 17:35:24 crc kubenswrapper[4823]: W0121 17:35:24.026924 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00e6120a_8367_4154_81be_2f80bc318ac2.slice/crio-e2014157e33cb989c42a729585530c9a2a57fe2c6fd27c8791fdcbc5ad95bfbe WatchSource:0}: Error finding container e2014157e33cb989c42a729585530c9a2a57fe2c6fd27c8791fdcbc5ad95bfbe: Status 404 returned error can't find the container with id e2014157e33cb989c42a729585530c9a2a57fe2c6fd27c8791fdcbc5ad95bfbe
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.174412 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-jd72h"]
Jan 21 17:35:24 crc kubenswrapper[4823]: W0121 17:35:24.189508 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d800059_c35e_403a_a930_f1db60cf5c75.slice/crio-3bb64add4ee4c6bd8f60a94f493acb2b97dc8a69f06b16555f716c02a410982f WatchSource:0}: Error finding container 3bb64add4ee4c6bd8f60a94f493acb2b97dc8a69f06b16555f716c02a410982f: Status 404 returned error can't find the container with id 3bb64add4ee4c6bd8f60a94f493acb2b97dc8a69f06b16555f716c02a410982f
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.292531 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x49h2" event={"ID":"00e6120a-8367-4154-81be-2f80bc318ac2","Type":"ContainerStarted","Data":"e2014157e33cb989c42a729585530c9a2a57fe2c6fd27c8791fdcbc5ad95bfbe"}
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.299032 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pmgrr" event={"ID":"e8dc75f0-c0f1-456e-912b-221ee7b6697c","Type":"ContainerStarted","Data":"654e65407bc10457cd615d1fea8a4a841d47c22461612042693420741fbc9978"}
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.299090 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pmgrr" event={"ID":"e8dc75f0-c0f1-456e-912b-221ee7b6697c","Type":"ContainerStarted","Data":"7e8b0ee8a46f9171c6dccca3f0a5d386060b63750ba206366c581e77e1776f1e"}
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.314099 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-jd72h" event={"ID":"0d800059-c35e-403a-a930-f1db60cf5c75","Type":"ContainerStarted","Data":"3bb64add4ee4c6bd8f60a94f493acb2b97dc8a69f06b16555f716c02a410982f"}
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.325786 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mhwlz" event={"ID":"4212f0db-bdbb-4cb0-87c0-d959e974c5a0","Type":"ContainerStarted","Data":"6382a311af50e055cb090b756e3b5bdb972b7be0beb02fe03d50b1861c0b8517"}
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.325883 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mhwlz" event={"ID":"4212f0db-bdbb-4cb0-87c0-d959e974c5a0","Type":"ContainerStarted","Data":"a9de8e5ffb22a179b99a81d4e15b142ee335d2514f69dcfad6f5244f838f7908"}
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.332164 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7899-account-create-update-wzjrc"]
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.343883 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-pmgrr" podStartSLOduration=2.343860809 podStartE2EDuration="2.343860809s" podCreationTimestamp="2026-01-21 17:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:24.325523186 +0000 UTC m=+1125.251654046" watchObservedRunningTime="2026-01-21 17:35:24.343860809 +0000 UTC m=+1125.269991669"
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.366694 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fv9fn"]
Jan 21 17:35:24 crc kubenswrapper[4823]: W0121 17:35:24.399691 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20dae312_edd5_4ad7_92ca_6c61465c9e5a.slice/crio-52ca72116a3827809c6bbcaffe0c4fae49d105bcf940e165936820c779889e29 WatchSource:0}: Error finding container 52ca72116a3827809c6bbcaffe0c4fae49d105bcf940e165936820c779889e29: Status 404 returned error can't find the container with id 52ca72116a3827809c6bbcaffe0c4fae49d105bcf940e165936820c779889e29
Jan 21 17:35:24 crc kubenswrapper[4823]: I0121 17:35:24.413037 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-mhwlz" podStartSLOduration=2.412999127 podStartE2EDuration="2.412999127s" podCreationTimestamp="2026-01-21 17:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:35:24.353427185 +0000 UTC m=+1125.279558055" watchObservedRunningTime="2026-01-21 17:35:24.412999127 +0000 UTC m=+1125.339129987"
Jan 21 17:35:24 crc kubenswrapper[4823]: E0121 17:35:24.903889 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4212f0db_bdbb_4cb0_87c0_d959e974c5a0.slice/crio-conmon-6382a311af50e055cb090b756e3b5bdb972b7be0beb02fe03d50b1861c0b8517.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.188176 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bfdrb" podUID="9588aa19-b204-450e-a781-2b3d119bd86e" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 17:35:25 crc kubenswrapper[4823]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 17:35:25 crc kubenswrapper[4823]: >
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.196811 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-504c-account-create-update-xlrsl"]
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.196953 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ht698"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.229753 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1561-account-create-update-lmxjf"]
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.263708 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ht698"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.367025 4823 generic.go:334] "Generic (PLEG): container finished" podID="20dae312-edd5-4ad7-92ca-6c61465c9e5a" containerID="1bd342e50081eaa1f183f6425035fd9f365aba092f24ca18e9b16e5525e0f073" exitCode=0
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.380574 4823 generic.go:334] "Generic (PLEG): container finished" podID="dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" containerID="c755567505367d945d19a184e6429b8f8c910b8cb650a1d094a870dce26c74a0" exitCode=0
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.383263 4823 generic.go:334] "Generic (PLEG): container finished" podID="dfacc23e-26c3-4308-ab42-371932e13246" containerID="7abe6528fff9918ec08a56a0eb3697b2c968668f8e65e52a776f071bad138ff6" exitCode=0
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.389778 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268d9f8b-8ddf-41a8-8d84-9d948ba3609b" path="/var/lib/kubelet/pods/268d9f8b-8ddf-41a8-8d84-9d948ba3609b/volumes"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.390532 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1561-account-create-update-lmxjf" event={"ID":"6ece2f30-4be3-4661-a2c8-bb5415023a2d","Type":"ContainerStarted","Data":"d95a6e012e4bfeb9eb4e08849a28c9ffb5debc2d725f3a2136bdf7231f361a4b"}
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.390569 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-504c-account-create-update-xlrsl" event={"ID":"6562d200-2356-4f66-905c-27ac16f6bd68","Type":"ContainerStarted","Data":"b655c80dff3a66ade1b4efa0980412174634dca2461a06d5ac00381728be2fc0"}
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.390586 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fv9fn" event={"ID":"20dae312-edd5-4ad7-92ca-6c61465c9e5a","Type":"ContainerDied","Data":"1bd342e50081eaa1f183f6425035fd9f365aba092f24ca18e9b16e5525e0f073"}
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.390604 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fv9fn" event={"ID":"20dae312-edd5-4ad7-92ca-6c61465c9e5a","Type":"ContainerStarted","Data":"52ca72116a3827809c6bbcaffe0c4fae49d105bcf940e165936820c779889e29"}
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.390615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2wtt" event={"ID":"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43","Type":"ContainerDied","Data":"c755567505367d945d19a184e6429b8f8c910b8cb650a1d094a870dce26c74a0"}
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.390629 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7899-account-create-update-wzjrc" event={"ID":"dfacc23e-26c3-4308-ab42-371932e13246","Type":"ContainerDied","Data":"7abe6528fff9918ec08a56a0eb3697b2c968668f8e65e52a776f071bad138ff6"}
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.390645 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7899-account-create-update-wzjrc" event={"ID":"dfacc23e-26c3-4308-ab42-371932e13246","Type":"ContainerStarted","Data":"0dc50782956b4dfadab231cabbf2b35840ce3c9b3bae254a0a6552b85792454f"}
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.391388 4823 generic.go:334] "Generic (PLEG): container finished" podID="e8dc75f0-c0f1-456e-912b-221ee7b6697c" containerID="654e65407bc10457cd615d1fea8a4a841d47c22461612042693420741fbc9978" exitCode=0
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.391479 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pmgrr" event={"ID":"e8dc75f0-c0f1-456e-912b-221ee7b6697c","Type":"ContainerDied","Data":"654e65407bc10457cd615d1fea8a4a841d47c22461612042693420741fbc9978"}
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.418101 4823 generic.go:334] "Generic (PLEG): container finished" podID="4212f0db-bdbb-4cb0-87c0-d959e974c5a0" containerID="6382a311af50e055cb090b756e3b5bdb972b7be0beb02fe03d50b1861c0b8517" exitCode=0
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.418253 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mhwlz" event={"ID":"4212f0db-bdbb-4cb0-87c0-d959e974c5a0","Type":"ContainerDied","Data":"6382a311af50e055cb090b756e3b5bdb972b7be0beb02fe03d50b1861c0b8517"}
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.541402 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bfdrb-config-p2j9w"]
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.542966 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.552869 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.555839 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bfdrb-config-p2j9w"]
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.677031 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-additional-scripts\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.679985 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-log-ovn\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.680063 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run-ovn\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.680285 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlnkk\" (UniqueName: \"kubernetes.io/projected/99f7192f-41dc-485d-9100-cc38e9563823-kube-api-access-vlnkk\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.680327 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-scripts\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.680506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.782417 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-additional-scripts\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.782473 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-log-ovn\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.782520 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run-ovn\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.782601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlnkk\" (UniqueName: \"kubernetes.io/projected/99f7192f-41dc-485d-9100-cc38e9563823-kube-api-access-vlnkk\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.782631 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-scripts\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.782700 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.783170 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run-ovn\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.783243 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-log-ovn\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.783399 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.784108 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-additional-scripts\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.786324 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-scripts\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.815199 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlnkk\" (UniqueName: \"kubernetes.io/projected/99f7192f-41dc-485d-9100-cc38e9563823-kube-api-access-vlnkk\") pod \"ovn-controller-bfdrb-config-p2j9w\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:25 crc kubenswrapper[4823]: I0121 17:35:25.908640 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bfdrb-config-p2j9w"
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.437542 4823 generic.go:334] "Generic (PLEG): container finished" podID="6ece2f30-4be3-4661-a2c8-bb5415023a2d" containerID="a308db27616df6948b4640fadcdd3d0792a51fe176ce6e60abd6f38b2f6c646a" exitCode=0
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.438082 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1561-account-create-update-lmxjf" event={"ID":"6ece2f30-4be3-4661-a2c8-bb5415023a2d","Type":"ContainerDied","Data":"a308db27616df6948b4640fadcdd3d0792a51fe176ce6e60abd6f38b2f6c646a"}
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.442227 4823 generic.go:334] "Generic (PLEG): container finished" podID="6562d200-2356-4f66-905c-27ac16f6bd68" containerID="d21aa915f20d7bd15a8b3ca028a73aa53ded7b8f9ed8b2db5bf48955e3f9c65a" exitCode=0
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.442605 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-504c-account-create-update-xlrsl" event={"ID":"6562d200-2356-4f66-905c-27ac16f6bd68","Type":"ContainerDied","Data":"d21aa915f20d7bd15a8b3ca028a73aa53ded7b8f9ed8b2db5bf48955e3f9c65a"}
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.545774 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bfdrb-config-p2j9w"]
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.770623 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.908045 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-ring-data-devices\") pod \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") "
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.908114 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-etc-swift\") pod \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") "
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.908162 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzfxg\" (UniqueName: \"kubernetes.io/projected/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-kube-api-access-fzfxg\") pod \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") "
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.908203 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-dispersionconf\") pod \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") "
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.908226 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-scripts\") pod \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") "
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.908298 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-combined-ca-bundle\") pod \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") "
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.908405 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-swiftconf\") pod \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\" (UID: \"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43\") "
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.909492 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" (UID: "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.911133 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" (UID: "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.921262 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-kube-api-access-fzfxg" (OuterVolumeSpecName: "kube-api-access-fzfxg") pod "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" (UID: "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43"). InnerVolumeSpecName "kube-api-access-fzfxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.954765 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" (UID: "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.955097 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" (UID: "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.984426 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-scripts" (OuterVolumeSpecName: "scripts") pod "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" (UID: "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:26 crc kubenswrapper[4823]: I0121 17:35:26.994219 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" (UID: "dfea14ef-6f13-4b56-99c5-74c8bb2d5e43"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.010912 4823 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.010942 4823 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.010952 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzfxg\" (UniqueName: \"kubernetes.io/projected/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-kube-api-access-fzfxg\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.010962 4823 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.010971 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.010979 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.010986 4823 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dfea14ef-6f13-4b56-99c5-74c8bb2d5e43-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.295430 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pmgrr"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.303169 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mhwlz"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.421145 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj9mq\" (UniqueName: \"kubernetes.io/projected/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-kube-api-access-qj9mq\") pod \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\" (UID: \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\") "
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.421701 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8dc75f0-c0f1-456e-912b-221ee7b6697c-operator-scripts\") pod \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\" (UID: \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\") "
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.421735 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-operator-scripts\") pod \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\" (UID: \"4212f0db-bdbb-4cb0-87c0-d959e974c5a0\") "
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.422800 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4212f0db-bdbb-4cb0-87c0-d959e974c5a0" (UID: "4212f0db-bdbb-4cb0-87c0-d959e974c5a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.424562 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8dc75f0-c0f1-456e-912b-221ee7b6697c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8dc75f0-c0f1-456e-912b-221ee7b6697c" (UID: "e8dc75f0-c0f1-456e-912b-221ee7b6697c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.425629 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnc5v\" (UniqueName: \"kubernetes.io/projected/e8dc75f0-c0f1-456e-912b-221ee7b6697c-kube-api-access-jnc5v\") pod \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\" (UID: \"e8dc75f0-c0f1-456e-912b-221ee7b6697c\") "
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.427073 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8dc75f0-c0f1-456e-912b-221ee7b6697c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.427097 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.438176 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-kube-api-access-qj9mq" (OuterVolumeSpecName: "kube-api-access-qj9mq") pod "4212f0db-bdbb-4cb0-87c0-d959e974c5a0" (UID: "4212f0db-bdbb-4cb0-87c0-d959e974c5a0"). InnerVolumeSpecName "kube-api-access-qj9mq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.443798 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8dc75f0-c0f1-456e-912b-221ee7b6697c-kube-api-access-jnc5v" (OuterVolumeSpecName: "kube-api-access-jnc5v") pod "e8dc75f0-c0f1-456e-912b-221ee7b6697c" (UID: "e8dc75f0-c0f1-456e-912b-221ee7b6697c"). InnerVolumeSpecName "kube-api-access-jnc5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.464978 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mhwlz" event={"ID":"4212f0db-bdbb-4cb0-87c0-d959e974c5a0","Type":"ContainerDied","Data":"a9de8e5ffb22a179b99a81d4e15b142ee335d2514f69dcfad6f5244f838f7908"}
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.465029 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9de8e5ffb22a179b99a81d4e15b142ee335d2514f69dcfad6f5244f838f7908"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.465155 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mhwlz"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.468916 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bfdrb-config-p2j9w" event={"ID":"99f7192f-41dc-485d-9100-cc38e9563823","Type":"ContainerStarted","Data":"6189f010250fce5b49641c19e72e5343703face2374b93c8475e39291f7b8910"}
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.486225 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2wtt" event={"ID":"dfea14ef-6f13-4b56-99c5-74c8bb2d5e43","Type":"ContainerDied","Data":"f3f520c718abbcceec97328567be941e3e662a1016ef7a61d7f9ccf7288bb9a3"}
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.486295 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3f520c718abbcceec97328567be941e3e662a1016ef7a61d7f9ccf7288bb9a3"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.486335 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2wtt"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.488184 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pmgrr" event={"ID":"e8dc75f0-c0f1-456e-912b-221ee7b6697c","Type":"ContainerDied","Data":"7e8b0ee8a46f9171c6dccca3f0a5d386060b63750ba206366c581e77e1776f1e"}
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.488221 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e8b0ee8a46f9171c6dccca3f0a5d386060b63750ba206366c581e77e1776f1e"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.488192 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pmgrr"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.528960 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnc5v\" (UniqueName: \"kubernetes.io/projected/e8dc75f0-c0f1-456e-912b-221ee7b6697c-kube-api-access-jnc5v\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.529003 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj9mq\" (UniqueName: \"kubernetes.io/projected/4212f0db-bdbb-4cb0-87c0-d959e974c5a0-kube-api-access-qj9mq\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.580360 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fv9fn"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.585018 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7899-account-create-update-wzjrc"
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.733156 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfacc23e-26c3-4308-ab42-371932e13246-operator-scripts\") pod \"dfacc23e-26c3-4308-ab42-371932e13246\" (UID: \"dfacc23e-26c3-4308-ab42-371932e13246\") "
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.733334 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxzj\" (UniqueName: \"kubernetes.io/projected/dfacc23e-26c3-4308-ab42-371932e13246-kube-api-access-mwxzj\") pod \"dfacc23e-26c3-4308-ab42-371932e13246\" (UID: \"dfacc23e-26c3-4308-ab42-371932e13246\") "
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.733619 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7wln\" (UniqueName: \"kubernetes.io/projected/20dae312-edd5-4ad7-92ca-6c61465c9e5a-kube-api-access-s7wln\") pod \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\" (UID: \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\") "
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.733669 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20dae312-edd5-4ad7-92ca-6c61465c9e5a-operator-scripts\") pod \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\" (UID: \"20dae312-edd5-4ad7-92ca-6c61465c9e5a\") "
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.733878 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfacc23e-26c3-4308-ab42-371932e13246-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfacc23e-26c3-4308-ab42-371932e13246" (UID: "dfacc23e-26c3-4308-ab42-371932e13246"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.734378 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20dae312-edd5-4ad7-92ca-6c61465c9e5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20dae312-edd5-4ad7-92ca-6c61465c9e5a" (UID: "20dae312-edd5-4ad7-92ca-6c61465c9e5a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.734499 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20dae312-edd5-4ad7-92ca-6c61465c9e5a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.734518 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfacc23e-26c3-4308-ab42-371932e13246-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.738257 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20dae312-edd5-4ad7-92ca-6c61465c9e5a-kube-api-access-s7wln" (OuterVolumeSpecName: "kube-api-access-s7wln") pod "20dae312-edd5-4ad7-92ca-6c61465c9e5a" (UID: "20dae312-edd5-4ad7-92ca-6c61465c9e5a"). InnerVolumeSpecName "kube-api-access-s7wln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.738799 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfacc23e-26c3-4308-ab42-371932e13246-kube-api-access-mwxzj" (OuterVolumeSpecName: "kube-api-access-mwxzj") pod "dfacc23e-26c3-4308-ab42-371932e13246" (UID: "dfacc23e-26c3-4308-ab42-371932e13246"). InnerVolumeSpecName "kube-api-access-mwxzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.837220 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7wln\" (UniqueName: \"kubernetes.io/projected/20dae312-edd5-4ad7-92ca-6c61465c9e5a-kube-api-access-s7wln\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.837293 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwxzj\" (UniqueName: \"kubernetes.io/projected/dfacc23e-26c3-4308-ab42-371932e13246-kube-api-access-mwxzj\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:27 crc kubenswrapper[4823]: I0121 17:35:27.976906 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.040317 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6562d200-2356-4f66-905c-27ac16f6bd68-operator-scripts\") pod \"6562d200-2356-4f66-905c-27ac16f6bd68\" (UID: \"6562d200-2356-4f66-905c-27ac16f6bd68\") "
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.041056 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp955\" (UniqueName: \"kubernetes.io/projected/6562d200-2356-4f66-905c-27ac16f6bd68-kube-api-access-qp955\") pod \"6562d200-2356-4f66-905c-27ac16f6bd68\" (UID: \"6562d200-2356-4f66-905c-27ac16f6bd68\") "
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.040823 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6562d200-2356-4f66-905c-27ac16f6bd68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6562d200-2356-4f66-905c-27ac16f6bd68" (UID: "6562d200-2356-4f66-905c-27ac16f6bd68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.049178 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6562d200-2356-4f66-905c-27ac16f6bd68-kube-api-access-qp955" (OuterVolumeSpecName: "kube-api-access-qp955") pod "6562d200-2356-4f66-905c-27ac16f6bd68" (UID: "6562d200-2356-4f66-905c-27ac16f6bd68"). InnerVolumeSpecName "kube-api-access-qp955". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.049490 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6562d200-2356-4f66-905c-27ac16f6bd68-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.111511 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.152914 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp955\" (UniqueName: \"kubernetes.io/projected/6562d200-2356-4f66-905c-27ac16f6bd68-kube-api-access-qp955\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.254098 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ece2f30-4be3-4661-a2c8-bb5415023a2d-operator-scripts\") pod \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\" (UID: \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\") "
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.254214 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z6s8\" (UniqueName: \"kubernetes.io/projected/6ece2f30-4be3-4661-a2c8-bb5415023a2d-kube-api-access-9z6s8\") pod \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\" (UID: \"6ece2f30-4be3-4661-a2c8-bb5415023a2d\") "
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.255018 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ece2f30-4be3-4661-a2c8-bb5415023a2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ece2f30-4be3-4661-a2c8-bb5415023a2d" (UID: "6ece2f30-4be3-4661-a2c8-bb5415023a2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.257611 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ece2f30-4be3-4661-a2c8-bb5415023a2d-kube-api-access-9z6s8" (OuterVolumeSpecName: "kube-api-access-9z6s8") pod "6ece2f30-4be3-4661-a2c8-bb5415023a2d" (UID: "6ece2f30-4be3-4661-a2c8-bb5415023a2d"). InnerVolumeSpecName "kube-api-access-9z6s8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.356694 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z6s8\" (UniqueName: \"kubernetes.io/projected/6ece2f30-4be3-4661-a2c8-bb5415023a2d-kube-api-access-9z6s8\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.356743 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ece2f30-4be3-4661-a2c8-bb5415023a2d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.513988 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7899-account-create-update-wzjrc" event={"ID":"dfacc23e-26c3-4308-ab42-371932e13246","Type":"ContainerDied","Data":"0dc50782956b4dfadab231cabbf2b35840ce3c9b3bae254a0a6552b85792454f"}
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.514038 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dc50782956b4dfadab231cabbf2b35840ce3c9b3bae254a0a6552b85792454f"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.514138 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7899-account-create-update-wzjrc"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.517564 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1561-account-create-update-lmxjf" event={"ID":"6ece2f30-4be3-4661-a2c8-bb5415023a2d","Type":"ContainerDied","Data":"d95a6e012e4bfeb9eb4e08849a28c9ffb5debc2d725f3a2136bdf7231f361a4b"}
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.517641 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d95a6e012e4bfeb9eb4e08849a28c9ffb5debc2d725f3a2136bdf7231f361a4b"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.517601 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1561-account-create-update-lmxjf"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.519801 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-504c-account-create-update-xlrsl" event={"ID":"6562d200-2356-4f66-905c-27ac16f6bd68","Type":"ContainerDied","Data":"b655c80dff3a66ade1b4efa0980412174634dca2461a06d5ac00381728be2fc0"}
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.519832 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b655c80dff3a66ade1b4efa0980412174634dca2461a06d5ac00381728be2fc0"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.519969 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-504c-account-create-update-xlrsl"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.524563 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fv9fn" event={"ID":"20dae312-edd5-4ad7-92ca-6c61465c9e5a","Type":"ContainerDied","Data":"52ca72116a3827809c6bbcaffe0c4fae49d105bcf940e165936820c779889e29"}
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.524604 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52ca72116a3827809c6bbcaffe0c4fae49d105bcf940e165936820c779889e29"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.524685 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fv9fn"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.745081 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p7g6z"]
Jan 21 17:35:28 crc kubenswrapper[4823]: E0121 17:35:28.745884 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" containerName="swift-ring-rebalance"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.745899 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" containerName="swift-ring-rebalance"
Jan 21 17:35:28 crc kubenswrapper[4823]: E0121 17:35:28.745922 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4212f0db-bdbb-4cb0-87c0-d959e974c5a0" containerName="mariadb-database-create"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.745928 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4212f0db-bdbb-4cb0-87c0-d959e974c5a0" containerName="mariadb-database-create"
Jan 21 17:35:28 crc kubenswrapper[4823]: E0121 17:35:28.745943 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8dc75f0-c0f1-456e-912b-221ee7b6697c" containerName="mariadb-database-create"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.745950 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8dc75f0-c0f1-456e-912b-221ee7b6697c" containerName="mariadb-database-create"
Jan 21 17:35:28 crc kubenswrapper[4823]: E0121 17:35:28.745962 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6562d200-2356-4f66-905c-27ac16f6bd68" containerName="mariadb-account-create-update"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.745968 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6562d200-2356-4f66-905c-27ac16f6bd68" containerName="mariadb-account-create-update"
Jan 21 17:35:28 crc kubenswrapper[4823]: E0121 17:35:28.745976 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ece2f30-4be3-4661-a2c8-bb5415023a2d" containerName="mariadb-account-create-update"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.745983 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ece2f30-4be3-4661-a2c8-bb5415023a2d" containerName="mariadb-account-create-update"
Jan 21 17:35:28 crc kubenswrapper[4823]: E0121 17:35:28.745997 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfacc23e-26c3-4308-ab42-371932e13246" containerName="mariadb-account-create-update"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.746003 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfacc23e-26c3-4308-ab42-371932e13246" containerName="mariadb-account-create-update"
Jan 21 17:35:28 crc kubenswrapper[4823]: E0121 17:35:28.746016 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20dae312-edd5-4ad7-92ca-6c61465c9e5a" containerName="mariadb-database-create"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.746022 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="20dae312-edd5-4ad7-92ca-6c61465c9e5a" containerName="mariadb-database-create"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.746185 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8dc75f0-c0f1-456e-912b-221ee7b6697c" containerName="mariadb-database-create"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.746196 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6562d200-2356-4f66-905c-27ac16f6bd68" containerName="mariadb-account-create-update"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.746206 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfea14ef-6f13-4b56-99c5-74c8bb2d5e43" containerName="swift-ring-rebalance"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.746219 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ece2f30-4be3-4661-a2c8-bb5415023a2d" containerName="mariadb-account-create-update"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.746227 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfacc23e-26c3-4308-ab42-371932e13246" containerName="mariadb-account-create-update"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.746239 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4212f0db-bdbb-4cb0-87c0-d959e974c5a0" containerName="mariadb-database-create"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.746248 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="20dae312-edd5-4ad7-92ca-6c61465c9e5a" containerName="mariadb-database-create"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.747135 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p7g6z"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.750837 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.763698 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p7g6z"]
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.890834 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pl4z\" (UniqueName: \"kubernetes.io/projected/233f4f9e-6a9f-4da5-8698-45692fb68176-kube-api-access-9pl4z\") pod \"root-account-create-update-p7g6z\" (UID: \"233f4f9e-6a9f-4da5-8698-45692fb68176\") " pod="openstack/root-account-create-update-p7g6z"
Jan 21 17:35:28 crc kubenswrapper[4823]: I0121 17:35:28.892708 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/233f4f9e-6a9f-4da5-8698-45692fb68176-operator-scripts\") pod \"root-account-create-update-p7g6z\" (UID: \"233f4f9e-6a9f-4da5-8698-45692fb68176\") " pod="openstack/root-account-create-update-p7g6z"
Jan 21 17:35:29 crc kubenswrapper[4823]: I0121 17:35:29.003055 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/233f4f9e-6a9f-4da5-8698-45692fb68176-operator-scripts\") pod \"root-account-create-update-p7g6z\" (UID: \"233f4f9e-6a9f-4da5-8698-45692fb68176\") " pod="openstack/root-account-create-update-p7g6z"
Jan 21 17:35:29 crc kubenswrapper[4823]: I0121 17:35:29.003183 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pl4z\" (UniqueName: \"kubernetes.io/projected/233f4f9e-6a9f-4da5-8698-45692fb68176-kube-api-access-9pl4z\") pod \"root-account-create-update-p7g6z\" (UID: \"233f4f9e-6a9f-4da5-8698-45692fb68176\") " pod="openstack/root-account-create-update-p7g6z"
Jan 21 17:35:29 crc kubenswrapper[4823]: I0121 17:35:29.005052 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName:
\"kubernetes.io/configmap/233f4f9e-6a9f-4da5-8698-45692fb68176-operator-scripts\") pod \"root-account-create-update-p7g6z\" (UID: \"233f4f9e-6a9f-4da5-8698-45692fb68176\") " pod="openstack/root-account-create-update-p7g6z" Jan 21 17:35:29 crc kubenswrapper[4823]: I0121 17:35:29.042387 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pl4z\" (UniqueName: \"kubernetes.io/projected/233f4f9e-6a9f-4da5-8698-45692fb68176-kube-api-access-9pl4z\") pod \"root-account-create-update-p7g6z\" (UID: \"233f4f9e-6a9f-4da5-8698-45692fb68176\") " pod="openstack/root-account-create-update-p7g6z" Jan 21 17:35:29 crc kubenswrapper[4823]: I0121 17:35:29.075291 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p7g6z" Jan 21 17:35:29 crc kubenswrapper[4823]: I0121 17:35:29.535308 4823 generic.go:334] "Generic (PLEG): container finished" podID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerID="65c57cc2faa2053e3831057252afd0bd6a79afe4474e02a62ed8e48d75eeb9bb" exitCode=0 Jan 21 17:35:29 crc kubenswrapper[4823]: I0121 17:35:29.535371 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerDied","Data":"65c57cc2faa2053e3831057252afd0bd6a79afe4474e02a62ed8e48d75eeb9bb"} Jan 21 17:35:29 crc kubenswrapper[4823]: I0121 17:35:29.539181 4823 generic.go:334] "Generic (PLEG): container finished" podID="99f7192f-41dc-485d-9100-cc38e9563823" containerID="faf3fdf785e5f9cf0f48eafcebd0b57c71ee2de9c24b29983eca1c223cd0f097" exitCode=0 Jan 21 17:35:29 crc kubenswrapper[4823]: I0121 17:35:29.539215 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bfdrb-config-p2j9w" event={"ID":"99f7192f-41dc-485d-9100-cc38e9563823","Type":"ContainerDied","Data":"faf3fdf785e5f9cf0f48eafcebd0b57c71ee2de9c24b29983eca1c223cd0f097"} Jan 21 17:35:30 crc kubenswrapper[4823]: I0121 17:35:30.133977 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-bfdrb" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.538671 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bfdrb-config-p2j9w" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.622382 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bfdrb-config-p2j9w" event={"ID":"99f7192f-41dc-485d-9100-cc38e9563823","Type":"ContainerDied","Data":"6189f010250fce5b49641c19e72e5343703face2374b93c8475e39291f7b8910"} Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.622430 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6189f010250fce5b49641c19e72e5343703face2374b93c8475e39291f7b8910" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.622448 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bfdrb-config-p2j9w" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.702712 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-additional-scripts\") pod \"99f7192f-41dc-485d-9100-cc38e9563823\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.704014 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "99f7192f-41dc-485d-9100-cc38e9563823" (UID: "99f7192f-41dc-485d-9100-cc38e9563823"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.704145 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run\") pod \"99f7192f-41dc-485d-9100-cc38e9563823\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.704217 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-log-ovn\") pod \"99f7192f-41dc-485d-9100-cc38e9563823\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.704271 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run-ovn\") pod \"99f7192f-41dc-485d-9100-cc38e9563823\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.704313 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "99f7192f-41dc-485d-9100-cc38e9563823" (UID: "99f7192f-41dc-485d-9100-cc38e9563823"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.704322 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run" (OuterVolumeSpecName: "var-run") pod "99f7192f-41dc-485d-9100-cc38e9563823" (UID: "99f7192f-41dc-485d-9100-cc38e9563823"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.704395 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "99f7192f-41dc-485d-9100-cc38e9563823" (UID: "99f7192f-41dc-485d-9100-cc38e9563823"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.704431 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlnkk\" (UniqueName: \"kubernetes.io/projected/99f7192f-41dc-485d-9100-cc38e9563823-kube-api-access-vlnkk\") pod \"99f7192f-41dc-485d-9100-cc38e9563823\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.704503 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-scripts\") pod \"99f7192f-41dc-485d-9100-cc38e9563823\" (UID: \"99f7192f-41dc-485d-9100-cc38e9563823\") " Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.705021 4823 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.705035 4823 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.705044 4823 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99f7192f-41dc-485d-9100-cc38e9563823-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.705055 4823 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.705966 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-scripts" (OuterVolumeSpecName: "scripts") pod "99f7192f-41dc-485d-9100-cc38e9563823" (UID: "99f7192f-41dc-485d-9100-cc38e9563823"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.710632 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f7192f-41dc-485d-9100-cc38e9563823-kube-api-access-vlnkk" (OuterVolumeSpecName: "kube-api-access-vlnkk") pod "99f7192f-41dc-485d-9100-cc38e9563823" (UID: "99f7192f-41dc-485d-9100-cc38e9563823"). InnerVolumeSpecName "kube-api-access-vlnkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.805684 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlnkk\" (UniqueName: \"kubernetes.io/projected/99f7192f-41dc-485d-9100-cc38e9563823-kube-api-access-vlnkk\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:37 crc kubenswrapper[4823]: I0121 17:35:37.805721 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f7192f-41dc-485d-9100-cc38e9563823-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.659353 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bfdrb-config-p2j9w"] Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.667444 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bfdrb-config-p2j9w"] Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.790787 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bfdrb-config-q49j6"] Jan 21 17:35:38 crc kubenswrapper[4823]: E0121 17:35:38.791217 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f7192f-41dc-485d-9100-cc38e9563823" containerName="ovn-config" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.791232 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f7192f-41dc-485d-9100-cc38e9563823" containerName="ovn-config" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.791414 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f7192f-41dc-485d-9100-cc38e9563823" containerName="ovn-config" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.792055 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.795255 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.824977 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-log-ovn\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.825059 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-scripts\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.825092 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.825164 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run-ovn\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.825244 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-additional-scripts\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.825269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sfm\" (UniqueName: \"kubernetes.io/projected/50d22352-eeec-4772-b285-df0472ad4f67-kube-api-access-d2sfm\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.829485 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bfdrb-config-q49j6"] Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.927232 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-additional-scripts\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.927284 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2sfm\" (UniqueName: 
\"kubernetes.io/projected/50d22352-eeec-4772-b285-df0472ad4f67-kube-api-access-d2sfm\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.927371 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-log-ovn\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.927408 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-scripts\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.927433 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.927500 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run-ovn\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.927872 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run-ovn\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.927939 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.927982 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-log-ovn\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.928561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-additional-scripts\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.931501 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-scripts\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:38 crc kubenswrapper[4823]: I0121 17:35:38.967080 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2sfm\" (UniqueName: \"kubernetes.io/projected/50d22352-eeec-4772-b285-df0472ad4f67-kube-api-access-d2sfm\") pod \"ovn-controller-bfdrb-config-q49j6\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:39 crc kubenswrapper[4823]: I0121 17:35:39.117271 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:35:39 crc kubenswrapper[4823]: I0121 17:35:39.362108 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f7192f-41dc-485d-9100-cc38e9563823" path="/var/lib/kubelet/pods/99f7192f-41dc-485d-9100-cc38e9563823/volumes" Jan 21 17:35:40 crc kubenswrapper[4823]: I0121 17:35:40.046655 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0" Jan 21 17:35:40 crc kubenswrapper[4823]: I0121 17:35:40.053547 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1146f69b-d935-4a56-9f65-e96bf9539c14-etc-swift\") pod \"swift-storage-0\" (UID: \"1146f69b-d935-4a56-9f65-e96bf9539c14\") " pod="openstack/swift-storage-0" Jan 21 17:35:40 crc kubenswrapper[4823]: I0121 17:35:40.206942 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 17:35:45 crc kubenswrapper[4823]: I0121 17:35:45.071635 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:35:45 crc kubenswrapper[4823]: I0121 17:35:45.072615 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:35:47 crc kubenswrapper[4823]: E0121 17:35:47.087068 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 21 17:35:47 crc kubenswrapper[4823]: E0121 17:35:47.087833 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmclg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-qk9xv_openstack(8285d29e-4f51-4b0c-8dd9-c613317c933c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:35:47 crc kubenswrapper[4823]: E0121 17:35:47.089180 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-qk9xv" podUID="8285d29e-4f51-4b0c-8dd9-c613317c933c" Jan 21 17:35:47 crc kubenswrapper[4823]: E0121 17:35:47.721733 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-qk9xv" podUID="8285d29e-4f51-4b0c-8dd9-c613317c933c" Jan 21 17:35:52 crc kubenswrapper[4823]: E0121 17:35:52.958013 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Jan 21 17:35:52 crc kubenswrapper[4823]: E0121 17:35:52.958821 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.enable-remote-write-receiver --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qshlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(9c7540d6-15ae-4931-b89b-2c9c0429b86a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:35:53 crc kubenswrapper[4823]: I0121 17:35:53.402979 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p7g6z"] Jan 21 17:35:59 crc kubenswrapper[4823]: W0121 17:35:59.780546 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod233f4f9e_6a9f_4da5_8698_45692fb68176.slice/crio-057c114fc409a47f9b8d9b79c7fb3acfa3e9f07bb7e7ca414958c000e99d5632 WatchSource:0}: Error finding container 057c114fc409a47f9b8d9b79c7fb3acfa3e9f07bb7e7ca414958c000e99d5632: Status 404 returned error can't find the container with id 057c114fc409a47f9b8d9b79c7fb3acfa3e9f07bb7e7ca414958c000e99d5632 Jan 21 17:35:59 crc kubenswrapper[4823]: I0121 17:35:59.804077 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 17:35:59 crc kubenswrapper[4823]: E0121 17:35:59.818971 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.128:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Jan 21 17:35:59 crc kubenswrapper[4823]: E0121 17:35:59.819082 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.128:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Jan 21 17:35:59 crc kubenswrapper[4823]: E0121 17:35:59.819312 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.128:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xlfnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-jd72h_openstack(0d800059-c35e-403a-a930-f1db60cf5c75): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:35:59 crc kubenswrapper[4823]: E0121 17:35:59.820969 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-jd72h" podUID="0d800059-c35e-403a-a930-f1db60cf5c75" Jan 21 17:35:59 crc kubenswrapper[4823]: I0121 17:35:59.846956 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p7g6z" event={"ID":"233f4f9e-6a9f-4da5-8698-45692fb68176","Type":"ContainerStarted","Data":"057c114fc409a47f9b8d9b79c7fb3acfa3e9f07bb7e7ca414958c000e99d5632"} Jan 21 17:35:59 crc kubenswrapper[4823]: E0121 17:35:59.854113 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.128:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-jd72h" podUID="0d800059-c35e-403a-a930-f1db60cf5c75" Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.247869 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bfdrb-config-q49j6"] Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.496652 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 17:36:00 
crc kubenswrapper[4823]: W0121 17:36:00.538361 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1146f69b_d935_4a56_9f65_e96bf9539c14.slice/crio-4a4322ff26b72d055ad42f3f11ebd1e0f08ebb582fd578e74f2fee002c5dd7d5 WatchSource:0}: Error finding container 4a4322ff26b72d055ad42f3f11ebd1e0f08ebb582fd578e74f2fee002c5dd7d5: Status 404 returned error can't find the container with id 4a4322ff26b72d055ad42f3f11ebd1e0f08ebb582fd578e74f2fee002c5dd7d5 Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.860294 4823 generic.go:334] "Generic (PLEG): container finished" podID="233f4f9e-6a9f-4da5-8698-45692fb68176" containerID="d43b9c5e7f373fe44cbd3f153ff197fc5273f0cc3a88c46e7c853827455b77a2" exitCode=0 Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.860370 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p7g6z" event={"ID":"233f4f9e-6a9f-4da5-8698-45692fb68176","Type":"ContainerDied","Data":"d43b9c5e7f373fe44cbd3f153ff197fc5273f0cc3a88c46e7c853827455b77a2"} Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.862693 4823 generic.go:334] "Generic (PLEG): container finished" podID="50d22352-eeec-4772-b285-df0472ad4f67" containerID="830e567e88aec7e8ac41bd87adef1f7be90d6a4918f6692c669789a28770e331" exitCode=0 Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.862777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bfdrb-config-q49j6" event={"ID":"50d22352-eeec-4772-b285-df0472ad4f67","Type":"ContainerDied","Data":"830e567e88aec7e8ac41bd87adef1f7be90d6a4918f6692c669789a28770e331"} Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.862824 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bfdrb-config-q49j6" event={"ID":"50d22352-eeec-4772-b285-df0472ad4f67","Type":"ContainerStarted","Data":"f2b2ccacb65c66fd39fbeb460474d195c8f87f7435d41443b5b01eaf56aa56eb"} Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.864673 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x49h2" event={"ID":"00e6120a-8367-4154-81be-2f80bc318ac2","Type":"ContainerStarted","Data":"220057f3df1fdb09592a246474dbe37f53716ffa21a02eff738acac1cdaf208d"} Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.866835 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qk9xv" event={"ID":"8285d29e-4f51-4b0c-8dd9-c613317c933c","Type":"ContainerStarted","Data":"140da72e9910ec786c5983426796e28e700ffc14c372085b180c78919159b4e3"} Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.868360 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"4a4322ff26b72d055ad42f3f11ebd1e0f08ebb582fd578e74f2fee002c5dd7d5"} Jan 21 17:36:00 crc kubenswrapper[4823]: I0121 17:36:00.917025 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x49h2" podStartSLOduration=3.165710891 podStartE2EDuration="38.917000361s" podCreationTimestamp="2026-01-21 17:35:22 +0000 UTC" firstStartedPulling="2026-01-21 17:35:24.03302607 +0000 UTC m=+1124.959156930" lastFinishedPulling="2026-01-21 17:35:59.78431554 +0000 UTC m=+1160.710446400" observedRunningTime="2026-01-21 17:36:00.907711761 +0000 UTC m=+1161.833842621" watchObservedRunningTime="2026-01-21 17:36:00.917000361 +0000 UTC m=+1161.843131221" Jan 21 17:36:00 crc 
kubenswrapper[4823]: I0121 17:36:00.931636 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qk9xv" podStartSLOduration=2.143116022 podStartE2EDuration="40.931616632s" podCreationTimestamp="2026-01-21 17:35:20 +0000 UTC" firstStartedPulling="2026-01-21 17:35:21.103753177 +0000 UTC m=+1122.029884037" lastFinishedPulling="2026-01-21 17:35:59.892253777 +0000 UTC m=+1160.818384647" observedRunningTime="2026-01-21 17:36:00.927384557 +0000 UTC m=+1161.853515427" watchObservedRunningTime="2026-01-21 17:36:00.931616632 +0000 UTC m=+1161.857747482" Jan 21 17:36:01 crc kubenswrapper[4823]: I0121 17:36:01.879415 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"e576d4b7a8475b11f40d65890e4eb50a669ec4c939d779a98ee97ba018d730ee"} Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.817149 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p7g6z" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.827125 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.893622 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p7g6z" event={"ID":"233f4f9e-6a9f-4da5-8698-45692fb68176","Type":"ContainerDied","Data":"057c114fc409a47f9b8d9b79c7fb3acfa3e9f07bb7e7ca414958c000e99d5632"} Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.893703 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057c114fc409a47f9b8d9b79c7fb3acfa3e9f07bb7e7ca414958c000e99d5632" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.893714 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p7g6z" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.896364 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bfdrb-config-q49j6" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.896352 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bfdrb-config-q49j6" event={"ID":"50d22352-eeec-4772-b285-df0472ad4f67","Type":"ContainerDied","Data":"f2b2ccacb65c66fd39fbeb460474d195c8f87f7435d41443b5b01eaf56aa56eb"} Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.896484 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b2ccacb65c66fd39fbeb460474d195c8f87f7435d41443b5b01eaf56aa56eb" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.902249 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"593cb74e2e5a2a550aceaa7be6cb48687ac2804727199b412e7d31cb7f9e2c52"} Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.902292 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"a13a72d9e1287d70de3470f9c90d6ac87e915d746bd39c81892a2095296a1f05"} Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933263 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/233f4f9e-6a9f-4da5-8698-45692fb68176-operator-scripts\") pod \"233f4f9e-6a9f-4da5-8698-45692fb68176\" (UID: \"233f4f9e-6a9f-4da5-8698-45692fb68176\") " Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933297 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-scripts\") pod \"50d22352-eeec-4772-b285-df0472ad4f67\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933384 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pl4z\" (UniqueName: \"kubernetes.io/projected/233f4f9e-6a9f-4da5-8698-45692fb68176-kube-api-access-9pl4z\") pod \"233f4f9e-6a9f-4da5-8698-45692fb68176\" (UID: \"233f4f9e-6a9f-4da5-8698-45692fb68176\") " Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933402 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run\") pod \"50d22352-eeec-4772-b285-df0472ad4f67\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933428 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2sfm\" (UniqueName: \"kubernetes.io/projected/50d22352-eeec-4772-b285-df0472ad4f67-kube-api-access-d2sfm\") pod \"50d22352-eeec-4772-b285-df0472ad4f67\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933495 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run-ovn\") pod \"50d22352-eeec-4772-b285-df0472ad4f67\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933534 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-additional-scripts\") pod \"50d22352-eeec-4772-b285-df0472ad4f67\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933611 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "50d22352-eeec-4772-b285-df0472ad4f67" (UID: "50d22352-eeec-4772-b285-df0472ad4f67"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933673 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-log-ovn\") pod \"50d22352-eeec-4772-b285-df0472ad4f67\" (UID: \"50d22352-eeec-4772-b285-df0472ad4f67\") " Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.933710 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run" (OuterVolumeSpecName: "var-run") pod "50d22352-eeec-4772-b285-df0472ad4f67" (UID: "50d22352-eeec-4772-b285-df0472ad4f67"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.934027 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "50d22352-eeec-4772-b285-df0472ad4f67" (UID: "50d22352-eeec-4772-b285-df0472ad4f67"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.934271 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/233f4f9e-6a9f-4da5-8698-45692fb68176-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "233f4f9e-6a9f-4da5-8698-45692fb68176" (UID: "233f4f9e-6a9f-4da5-8698-45692fb68176"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.934312 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "50d22352-eeec-4772-b285-df0472ad4f67" (UID: "50d22352-eeec-4772-b285-df0472ad4f67"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.934714 4823 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.934714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-scripts" (OuterVolumeSpecName: "scripts") pod "50d22352-eeec-4772-b285-df0472ad4f67" (UID: "50d22352-eeec-4772-b285-df0472ad4f67"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.934729 4823 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.934758 4823 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.934772 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/233f4f9e-6a9f-4da5-8698-45692fb68176-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.934785 4823 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50d22352-eeec-4772-b285-df0472ad4f67-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.942381 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d22352-eeec-4772-b285-df0472ad4f67-kube-api-access-d2sfm" (OuterVolumeSpecName: "kube-api-access-d2sfm") pod "50d22352-eeec-4772-b285-df0472ad4f67" (UID: "50d22352-eeec-4772-b285-df0472ad4f67"). InnerVolumeSpecName "kube-api-access-d2sfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:02 crc kubenswrapper[4823]: I0121 17:36:02.945168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/233f4f9e-6a9f-4da5-8698-45692fb68176-kube-api-access-9pl4z" (OuterVolumeSpecName: "kube-api-access-9pl4z") pod "233f4f9e-6a9f-4da5-8698-45692fb68176" (UID: "233f4f9e-6a9f-4da5-8698-45692fb68176"). InnerVolumeSpecName "kube-api-access-9pl4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:03 crc kubenswrapper[4823]: I0121 17:36:03.036538 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50d22352-eeec-4772-b285-df0472ad4f67-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:03 crc kubenswrapper[4823]: I0121 17:36:03.036975 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pl4z\" (UniqueName: \"kubernetes.io/projected/233f4f9e-6a9f-4da5-8698-45692fb68176-kube-api-access-9pl4z\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:03 crc kubenswrapper[4823]: I0121 17:36:03.036992 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2sfm\" (UniqueName: \"kubernetes.io/projected/50d22352-eeec-4772-b285-df0472ad4f67-kube-api-access-d2sfm\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:03 crc kubenswrapper[4823]: I0121 17:36:03.914550 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"fe8e8f3d61d3aa7f201402c5bd7776f3d254c0ab78b7a2c847254a309e036ea9"} Jan 21 17:36:03 crc kubenswrapper[4823]: I0121 17:36:03.917713 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerStarted","Data":"583a0e2faa3deb9f5f136fb8e74f7df8738130af6fe90f46666e39764b49e975"} Jan 21 17:36:03 crc kubenswrapper[4823]: I0121 17:36:03.955755 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bfdrb-config-q49j6"] Jan 21 17:36:03 crc kubenswrapper[4823]: I0121 17:36:03.972490 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bfdrb-config-q49j6"] Jan 21 17:36:04 crc kubenswrapper[4823]: I0121 17:36:04.938382 4823 generic.go:334] "Generic (PLEG): container finished" podID="00e6120a-8367-4154-81be-2f80bc318ac2" containerID="220057f3df1fdb09592a246474dbe37f53716ffa21a02eff738acac1cdaf208d" exitCode=0 Jan 21 17:36:04 crc kubenswrapper[4823]: I0121 17:36:04.938556 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x49h2" event={"ID":"00e6120a-8367-4154-81be-2f80bc318ac2","Type":"ContainerDied","Data":"220057f3df1fdb09592a246474dbe37f53716ffa21a02eff738acac1cdaf208d"} Jan 21 17:36:05 crc kubenswrapper[4823]: I0121 17:36:05.354374 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50d22352-eeec-4772-b285-df0472ad4f67" path="/var/lib/kubelet/pods/50d22352-eeec-4772-b285-df0472ad4f67/volumes" Jan 21 17:36:05 crc kubenswrapper[4823]: I0121 17:36:05.955411 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"609762bed4fadeef4fd294da63c7dd6b756b5075f0fdc065c1f0dccdd5ea6827"} Jan 21 17:36:05 crc kubenswrapper[4823]: I0121 17:36:05.955481 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"20fa8918d12b7a6e6f307eb3bf6e5de1287893b3fcdacb7c2ac857733a7668a5"} Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.179433 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x49h2" Jan 21 17:36:07 crc kubenswrapper[4823]: E0121 17:36:07.305064 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.323237 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr8fk\" (UniqueName: \"kubernetes.io/projected/00e6120a-8367-4154-81be-2f80bc318ac2-kube-api-access-lr8fk\") pod \"00e6120a-8367-4154-81be-2f80bc318ac2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.323392 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-config-data\") pod \"00e6120a-8367-4154-81be-2f80bc318ac2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.323561 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-combined-ca-bundle\") pod \"00e6120a-8367-4154-81be-2f80bc318ac2\" (UID: \"00e6120a-8367-4154-81be-2f80bc318ac2\") " Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.332194 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e6120a-8367-4154-81be-2f80bc318ac2-kube-api-access-lr8fk" (OuterVolumeSpecName: "kube-api-access-lr8fk") pod "00e6120a-8367-4154-81be-2f80bc318ac2" (UID: "00e6120a-8367-4154-81be-2f80bc318ac2"). InnerVolumeSpecName "kube-api-access-lr8fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.357042 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00e6120a-8367-4154-81be-2f80bc318ac2" (UID: "00e6120a-8367-4154-81be-2f80bc318ac2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.379428 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-config-data" (OuterVolumeSpecName: "config-data") pod "00e6120a-8367-4154-81be-2f80bc318ac2" (UID: "00e6120a-8367-4154-81be-2f80bc318ac2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.452107 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.452182 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e6120a-8367-4154-81be-2f80bc318ac2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.452198 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr8fk\" (UniqueName: \"kubernetes.io/projected/00e6120a-8367-4154-81be-2f80bc318ac2-kube-api-access-lr8fk\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.977982 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x49h2" event={"ID":"00e6120a-8367-4154-81be-2f80bc318ac2","Type":"ContainerDied","Data":"e2014157e33cb989c42a729585530c9a2a57fe2c6fd27c8791fdcbc5ad95bfbe"} Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.978310 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2014157e33cb989c42a729585530c9a2a57fe2c6fd27c8791fdcbc5ad95bfbe" Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.978056 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x49h2" Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.985830 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"ee9f9e971aa3ae5b19cf22349b50987498e35787916b04b79697f8faa222cdaf"} Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.985932 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"3801fafc3ec74f8311644e500f6692a9be663751e2bf89933e14634f1b5e0d42"} Jan 21 17:36:07 crc kubenswrapper[4823]: I0121 17:36:07.990600 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerStarted","Data":"0b9c1ca3c3f00c2fea0933ecc1a234742c5be85950363e795331c7103ea267f4"} Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.519704 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4dtxc"] Jan 21 17:36:08 crc kubenswrapper[4823]: E0121 17:36:08.532512 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e6120a-8367-4154-81be-2f80bc318ac2" containerName="keystone-db-sync" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.532556 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e6120a-8367-4154-81be-2f80bc318ac2" containerName="keystone-db-sync" Jan 21 17:36:08 crc kubenswrapper[4823]: E0121 17:36:08.532569 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233f4f9e-6a9f-4da5-8698-45692fb68176" containerName="mariadb-account-create-update" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.532576 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="233f4f9e-6a9f-4da5-8698-45692fb68176" containerName="mariadb-account-create-update" Jan 21 17:36:08 crc kubenswrapper[4823]: E0121 17:36:08.532610 4823 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d22352-eeec-4772-b285-df0472ad4f67" containerName="ovn-config" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.532617 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d22352-eeec-4772-b285-df0472ad4f67" containerName="ovn-config" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.532826 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e6120a-8367-4154-81be-2f80bc318ac2" containerName="keystone-db-sync" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.532867 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="233f4f9e-6a9f-4da5-8698-45692fb68176" containerName="mariadb-account-create-update" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.532879 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d22352-eeec-4772-b285-df0472ad4f67" containerName="ovn-config" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.533677 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.540326 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.540789 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.540968 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.541265 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.541453 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jlj99" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.560531 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4dtxc"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.568379 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-6l7sh"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.569754 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh"
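Before admitting keystone-bootstrap-4dtxc, the CPU and memory managers sweep assignments left behind by containers whose pods are gone, here keystone-db-sync, mariadb-account-create-update and ovn-config. A small sketch of such a stale-state sweep; the map layout and cpuset values are invented, not the state_mem.go format:

    package main

    import "fmt"

    func main() {
        // Assignments keyed podUID/container, mirroring the RemoveStaleState
        // entries above; the cpuset values are invented for illustration.
        assignments := map[string]string{
            "00e6120a-8367-4154-81be-2f80bc318ac2/keystone-db-sync":              "0-1",
            "233f4f9e-6a9f-4da5-8698-45692fb68176/mariadb-account-create-update": "2",
            "50d22352-eeec-4772-b285-df0472ad4f67/ovn-config":                    "3",
        }
        active := map[string]bool{} // none of these pods is active any longer

        for key := range assignments {
            if !active[key] {
                fmt.Printf("RemoveStaleState: removing container %q\n", key)
                delete(assignments, key) // cf. "Deleted CPUSet assignment"
            }
        }
    }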
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.612783 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-6l7sh"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.625721 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nt5t\" (UniqueName: \"kubernetes.io/projected/c12ac245-b6c4-472e-8a8b-ca63e607247d-kube-api-access-8nt5t\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.625810 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-config-data\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.625836 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-scripts\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.625866 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-fernet-keys\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.625920 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-config\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.625962 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.625979 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-credential-keys\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.626004 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.626023 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.626041 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncpf2\" (UniqueName: \"kubernetes.io/projected/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-kube-api-access-ncpf2\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.626072 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-combined-ca-bundle\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.668755 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59549fbb65-j6m8v"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.670374 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.677003 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.677780 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-nxvh5" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.677971 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.677995 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.692228 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59549fbb65-j6m8v"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.721039 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6l2l8"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.722284 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728230 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-fernet-keys\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728305 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-config\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728344 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-scripts\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728381 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41a34226-6809-48a6-be65-12d1c025ef32-horizon-secret-key\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728410 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgtg\" (UniqueName: \"kubernetes.io/projected/41a34226-6809-48a6-be65-12d1c025ef32-kube-api-access-fxgtg\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728436 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a34226-6809-48a6-be65-12d1c025ef32-logs\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728484 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728511 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-credential-keys\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728550 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 
crc kubenswrapper[4823]: I0121 17:36:08.728580 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728604 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncpf2\" (UniqueName: \"kubernetes.io/projected/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-kube-api-access-ncpf2\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728642 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728658 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-combined-ca-bundle\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728706 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt5t\" (UniqueName: \"kubernetes.io/projected/c12ac245-b6c4-472e-8a8b-ca63e607247d-kube-api-access-8nt5t\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-config-data\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728803 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-config-data\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728830 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-scripts\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.728888 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.729118 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5h4v2" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.732110 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-config\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" 
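Each reflector.go "Caches populated" entry above marks a dedicated list+watch cache coming up for one Secret or ConfigMap that a newly admitted pod references, here the neutron config objects. The same reflector machinery is what client-go informers are built on; a minimal sketch using shared informers (the kubeconfig location, namespace and resync period are illustrative):

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactoryWithOptions(
            client, 30*time.Second, informers.WithNamespace("openstack"))
        secrets := factory.Core().V1().Secrets().Informer()
        secrets.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                fmt.Println("cached secret:", obj.(*corev1.Secret).Name)
            },
        })

        stop := make(chan struct{})
        factory.Start(stop)
        // The cache is "populated" once the initial list has landed.
        cache.WaitForCacheSync(stop, secrets.HasSynced)
    }

The kubelet runs one such reflector per referenced object so that volume contents stay current without polling the API server.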
Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.732847 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.733411 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.734209 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.746610 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-config-data\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.748538 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-scripts\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.748762 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-fernet-keys\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.748840 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-combined-ca-bundle\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.755189 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6l2l8"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.759561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-credential-keys\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.775945 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt5t\" (UniqueName: \"kubernetes.io/projected/c12ac245-b6c4-472e-8a8b-ca63e607247d-kube-api-access-8nt5t\") pod \"dnsmasq-dns-5c9d85d47c-6l7sh\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc 
kubenswrapper[4823]: I0121 17:36:08.808067 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncpf2\" (UniqueName: \"kubernetes.io/projected/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-kube-api-access-ncpf2\") pod \"keystone-bootstrap-4dtxc\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.831165 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41a34226-6809-48a6-be65-12d1c025ef32-horizon-secret-key\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.831208 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgtg\" (UniqueName: \"kubernetes.io/projected/41a34226-6809-48a6-be65-12d1c025ef32-kube-api-access-fxgtg\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.831233 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a34226-6809-48a6-be65-12d1c025ef32-logs\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.831286 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-config\") pod \"neutron-db-sync-6l2l8\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.831310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-combined-ca-bundle\") pod \"neutron-db-sync-6l2l8\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.831376 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfbb\" (UniqueName: \"kubernetes.io/projected/4b459c03-94c5-43e1-bda8-b7e174f3830c-kube-api-access-wsfbb\") pod \"neutron-db-sync-6l2l8\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.831417 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-config-data\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.831468 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-scripts\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.832757 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a34226-6809-48a6-be65-12d1c025ef32-logs\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.833730 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-config-data\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.836198 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-scripts\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.836702 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-fr4qp"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.837355 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41a34226-6809-48a6-be65-12d1c025ef32-horizon-secret-key\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.837971 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.849154 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fr4qp"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.881581 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.881654 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.881709 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rh7hd" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.882026 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.896994 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxgtg\" (UniqueName: \"kubernetes.io/projected/41a34226-6809-48a6-be65-12d1c025ef32-kube-api-access-fxgtg\") pod \"horizon-59549fbb65-j6m8v\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.902219 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh"
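The horizon-59549fbb65-j6m8v mounts above span four volume plugins: configmap (scripts, config-data), secret (horizon-secret-key), empty-dir (logs) and projected (kube-api-access-fxgtg). Declared as client-go types it would look roughly like this; the backing ConfigMap names match the reflector entries earlier in the log, while the Secret name is an assumption:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Volume sources mirroring the horizon mounts in the log. The
        // kube-api-access-* projected token volume seen in the log is injected
        // automatically by the API server and is not declared in the pod spec.
        volumes := []corev1.Volume{
            {Name: "scripts", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "horizon-scripts"}}}},
            {Name: "config-data", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "horizon-config-data"}}}},
            {Name: "horizon-secret-key", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "horizon"}}}, // assumed name
            {Name: "logs", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{}}},
        }
        for _, v := range volumes {
            fmt.Println("volume:", v.Name)
        }
    }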
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.936698 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d118987-76ea-46aa-9989-274e87e36d3a-etc-machine-id\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.936740 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-combined-ca-bundle\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.936760 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-scripts\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.936783 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsfbb\" (UniqueName: \"kubernetes.io/projected/4b459c03-94c5-43e1-bda8-b7e174f3830c-kube-api-access-wsfbb\") pod \"neutron-db-sync-6l2l8\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.936824 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-config-data\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.936847 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-db-sync-config-data\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.936882 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gkfv\" (UniqueName: \"kubernetes.io/projected/2d118987-76ea-46aa-9989-274e87e36d3a-kube-api-access-8gkfv\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.936931 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-config\") pod \"neutron-db-sync-6l2l8\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.936955 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-combined-ca-bundle\") pod \"neutron-db-sync-6l2l8\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " pod="openstack/neutron-db-sync-6l2l8" Jan 21 
17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.941967 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-config\") pod \"neutron-db-sync-6l2l8\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.948073 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-combined-ca-bundle\") pod \"neutron-db-sync-6l2l8\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.990142 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsfbb\" (UniqueName: \"kubernetes.io/projected/4b459c03-94c5-43e1-bda8-b7e174f3830c-kube-api-access-wsfbb\") pod \"neutron-db-sync-6l2l8\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.998508 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vh8wn"] Jan 21 17:36:08 crc kubenswrapper[4823]: I0121 17:36:08.999883 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.002908 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.012221 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.012512 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5p2x7" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.022948 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-969658cd5-27ggm"] Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.024702 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.038756 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d118987-76ea-46aa-9989-274e87e36d3a-etc-machine-id\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.038810 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-combined-ca-bundle\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.038830 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-scripts\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.039169 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d118987-76ea-46aa-9989-274e87e36d3a-etc-machine-id\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.041342 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-config-data\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.041379 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-db-sync-config-data\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.045272 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gkfv\" (UniqueName: \"kubernetes.io/projected/2d118987-76ea-46aa-9989-274e87e36d3a-kube-api-access-8gkfv\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.058069 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-db-sync-config-data\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.066358 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-config-data\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.097979 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vh8wn"] 
Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.111591 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-combined-ca-bundle\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.113620 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-scripts\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.124340 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gkfv\" (UniqueName: \"kubernetes.io/projected/2d118987-76ea-46aa-9989-274e87e36d3a-kube-api-access-8gkfv\") pod \"cinder-db-sync-fr4qp\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.130241 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-969658cd5-27ggm"] Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.148367 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.150524 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.151632 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjrv6\" (UniqueName: \"kubernetes.io/projected/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-kube-api-access-fjrv6\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.151690 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-logs\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.151725 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-combined-ca-bundle\") pod \"barbican-db-sync-vh8wn\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.151747 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-config-data\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.151780 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-scripts\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 
crc kubenswrapper[4823]: I0121 17:36:09.151818 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-horizon-secret-key\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.151867 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdqqd\" (UniqueName: \"kubernetes.io/projected/c29754ad-e324-474f-a0df-d450b9152aa3-kube-api-access-mdqqd\") pod \"barbican-db-sync-vh8wn\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.151898 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-db-sync-config-data\") pod \"barbican-db-sync-vh8wn\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.157673 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.157920 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.193866 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qnhv5"] Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.196099 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.257811 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.266328 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284137 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284584 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkpf\" (UniqueName: \"kubernetes.io/projected/a7924f2b-6db5-4473-ae49-91c0d32fa817-kube-api-access-pfkpf\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-scripts\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284653 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-scripts\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284689 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7924f2b-6db5-4473-ae49-91c0d32fa817-logs\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284710 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-config-data\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284765 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-horizon-secret-key\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284825 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-scripts\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.284874 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdqqd\" (UniqueName: \"kubernetes.io/projected/c29754ad-e324-474f-a0df-d450b9152aa3-kube-api-access-mdqqd\") pod \"barbican-db-sync-vh8wn\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 
17:36:09.285049 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-db-sync-config-data\") pod \"barbican-db-sync-vh8wn\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285119 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjrv6\" (UniqueName: \"kubernetes.io/projected/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-kube-api-access-fjrv6\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285144 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-run-httpd\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285172 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-logs\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285202 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsjgz\" (UniqueName: \"kubernetes.io/projected/6697a997-d4df-46c4-8520-8d23c6203f87-kube-api-access-hsjgz\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285227 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-log-httpd\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285263 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-combined-ca-bundle\") pod \"barbican-db-sync-vh8wn\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285283 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-config-data\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285310 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-config-data\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285332 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-combined-ca-bundle\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285408 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.285621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-scripts\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.286154 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tzvkv" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.292684 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.309467 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-logs\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.322118 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-db-sync-config-data\") pod \"barbican-db-sync-vh8wn\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.330556 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-config-data\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.331767 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-combined-ca-bundle\") pod \"barbican-db-sync-vh8wn\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.332871 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjrv6\" (UniqueName: \"kubernetes.io/projected/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-kube-api-access-fjrv6\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.361522 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-horizon-secret-key\") pod \"horizon-969658cd5-27ggm\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc 
kubenswrapper[4823]: I0121 17:36:09.445616 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-config-data\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445680 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-combined-ca-bundle\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445700 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445731 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445751 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkpf\" (UniqueName: \"kubernetes.io/projected/a7924f2b-6db5-4473-ae49-91c0d32fa817-kube-api-access-pfkpf\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445771 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-scripts\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445793 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7924f2b-6db5-4473-ae49-91c0d32fa817-logs\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445809 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-config-data\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445873 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-scripts\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445953 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-run-httpd\") pod \"ceilometer-0\" (UID: 
\"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445979 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsjgz\" (UniqueName: \"kubernetes.io/projected/6697a997-d4df-46c4-8520-8d23c6203f87-kube-api-access-hsjgz\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.445996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-log-httpd\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.446643 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-log-httpd\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.452846 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-scripts\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.453252 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-run-httpd\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.454209 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-scripts\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.459425 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-combined-ca-bundle\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.464737 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.464869 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdqqd\" (UniqueName: \"kubernetes.io/projected/c29754ad-e324-474f-a0df-d450b9152aa3-kube-api-access-mdqqd\") pod \"barbican-db-sync-vh8wn\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.467332 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7924f2b-6db5-4473-ae49-91c0d32fa817-logs\") pod \"placement-db-sync-qnhv5\" (UID: 
\"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.477877 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkpf\" (UniqueName: \"kubernetes.io/projected/a7924f2b-6db5-4473-ae49-91c0d32fa817-kube-api-access-pfkpf\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.485502 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsjgz\" (UniqueName: \"kubernetes.io/projected/6697a997-d4df-46c4-8520-8d23c6203f87-kube-api-access-hsjgz\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.488838 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-config-data\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.490206 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-config-data\") pod \"placement-db-sync-qnhv5\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.490647 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.493016 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.495572 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-6l7sh"] Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.499319 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.500594 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.515277 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qnhv5" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.554727 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qnhv5"] Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.575217 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-gjk9h"] Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.577530 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.598708 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.603310 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-gjk9h"] Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.752875 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-config\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.752965 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.753022 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.753205 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.753587 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpw28\" (UniqueName: \"kubernetes.io/projected/18489f11-9dbf-4f69-b20c-cded3eae0292-kube-api-access-mpw28\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.856590 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.856733 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpw28\" (UniqueName: \"kubernetes.io/projected/18489f11-9dbf-4f69-b20c-cded3eae0292-kube-api-access-mpw28\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.856881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-config\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.856935 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.856967 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.857599 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.858166 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.858273 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-config\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.858828 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.878881 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpw28\" (UniqueName: \"kubernetes.io/projected/18489f11-9dbf-4f69-b20c-cded3eae0292-kube-api-access-mpw28\") pod \"dnsmasq-dns-6ffb94d8ff-gjk9h\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:09 crc kubenswrapper[4823]: I0121 17:36:09.896956 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:10 crc kubenswrapper[4823]: I0121 17:36:10.154515 4823 generic.go:334] "Generic (PLEG): container finished" podID="8285d29e-4f51-4b0c-8dd9-c613317c933c" containerID="140da72e9910ec786c5983426796e28e700ffc14c372085b180c78919159b4e3" exitCode=0 Jan 21 17:36:10 crc kubenswrapper[4823]: I0121 17:36:10.154619 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qk9xv" event={"ID":"8285d29e-4f51-4b0c-8dd9-c613317c933c","Type":"ContainerDied","Data":"140da72e9910ec786c5983426796e28e700ffc14c372085b180c78919159b4e3"} Jan 21 17:36:10 crc kubenswrapper[4823]: I0121 17:36:10.426635 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6l2l8"] Jan 21 17:36:10 crc kubenswrapper[4823]: I0121 17:36:10.800798 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59549fbb65-j6m8v"] Jan 21 17:36:10 crc kubenswrapper[4823]: I0121 17:36:10.859958 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6455c87555-hpzsh"] Jan 21 17:36:10 crc kubenswrapper[4823]: I0121 17:36:10.862308 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:10 crc kubenswrapper[4823]: I0121 17:36:10.880602 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6455c87555-hpzsh"] Jan 21 17:36:10 crc kubenswrapper[4823]: I0121 17:36:10.974800 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.008975 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/527e03e8-f94d-4d8a-b04f-e867350c6f32-horizon-secret-key\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.009097 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-scripts\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.009228 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9tsg\" (UniqueName: \"kubernetes.io/projected/527e03e8-f94d-4d8a-b04f-e867350c6f32-kube-api-access-w9tsg\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.009259 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-config-data\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.009318 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527e03e8-f94d-4d8a-b04f-e867350c6f32-logs\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " 
pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.077956 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.110956 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/527e03e8-f94d-4d8a-b04f-e867350c6f32-horizon-secret-key\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.111027 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-scripts\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.111108 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9tsg\" (UniqueName: \"kubernetes.io/projected/527e03e8-f94d-4d8a-b04f-e867350c6f32-kube-api-access-w9tsg\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.111130 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-config-data\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.111158 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527e03e8-f94d-4d8a-b04f-e867350c6f32-logs\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.113991 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527e03e8-f94d-4d8a-b04f-e867350c6f32-logs\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.114698 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-scripts\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.115952 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-config-data\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.117650 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qnhv5"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.122469 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/527e03e8-f94d-4d8a-b04f-e867350c6f32-horizon-secret-key\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.125380 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-969658cd5-27ggm"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.130789 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9tsg\" (UniqueName: \"kubernetes.io/projected/527e03e8-f94d-4d8a-b04f-e867350c6f32-kube-api-access-w9tsg\") pod \"horizon-6455c87555-hpzsh\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: W0121 17:36:11.131627 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7924f2b_6db5_4473_ae49_91c0d32fa817.slice/crio-bcc362eb1c9fe031b93862c9c1a23ee5164a5f097a3a84f4eff665ee0d0fd94b WatchSource:0}: Error finding container bcc362eb1c9fe031b93862c9c1a23ee5164a5f097a3a84f4eff665ee0d0fd94b: Status 404 returned error can't find the container with id bcc362eb1c9fe031b93862c9c1a23ee5164a5f097a3a84f4eff665ee0d0fd94b Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.207626 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6697a997-d4df-46c4-8520-8d23c6203f87","Type":"ContainerStarted","Data":"592ea870a90482eca89950162e621e2e2b85554b08d6279099607c8c98b520b4"} Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.214555 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.225885 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"38e9bb80bd1e1a014638d5e5d3a13a6cef68ed7961541b77aafaea6ac94defc9"} Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.239633 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerStarted","Data":"e3c7bce20716bbc4d63196a0b4b22eea62cf5e1508f90b88acb048019b2705b9"} Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.258741 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6l2l8" event={"ID":"4b459c03-94c5-43e1-bda8-b7e174f3830c","Type":"ContainerStarted","Data":"96ad8c386975f7c37f9f67d9d8d2cb614ed8ee0e1c8b48791d39efc14e843b92"} Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.258957 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6l2l8" event={"ID":"4b459c03-94c5-43e1-bda8-b7e174f3830c","Type":"ContainerStarted","Data":"6cfc0fce31f4741f77fcc1c8aa5a8de2f521c90e0047ea355d3f61e5dbf91bf3"} Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.280375 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-969658cd5-27ggm" event={"ID":"6cd14b37-c130-44e0-b0f3-3508ea1b4e54","Type":"ContainerStarted","Data":"8957438aec79b760904701aef49b7415b3f6b2266f11184a419eebd9cd67dbb6"} Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.303637 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qnhv5" 
event={"ID":"a7924f2b-6db5-4473-ae49-91c0d32fa817","Type":"ContainerStarted","Data":"bcc362eb1c9fe031b93862c9c1a23ee5164a5f097a3a84f4eff665ee0d0fd94b"} Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.315465 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fr4qp"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.378582 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4dtxc"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.387085 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.688400723 podStartE2EDuration="1m55.387064582s" podCreationTimestamp="2026-01-21 17:34:16 +0000 UTC" firstStartedPulling="2026-01-21 17:34:32.284265244 +0000 UTC m=+1073.210396104" lastFinishedPulling="2026-01-21 17:36:09.982929103 +0000 UTC m=+1170.909059963" observedRunningTime="2026-01-21 17:36:11.297659043 +0000 UTC m=+1172.223789913" watchObservedRunningTime="2026-01-21 17:36:11.387064582 +0000 UTC m=+1172.313195442" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.419228 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6l2l8" podStartSLOduration=3.419207536 podStartE2EDuration="3.419207536s" podCreationTimestamp="2026-01-21 17:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:11.317894363 +0000 UTC m=+1172.244025223" watchObservedRunningTime="2026-01-21 17:36:11.419207536 +0000 UTC m=+1172.345338396" Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.434832 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vh8wn"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.440294 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-6l7sh"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.479972 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-gjk9h"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.748362 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59549fbb65-j6m8v"] Jan 21 17:36:11 crc kubenswrapper[4823]: I0121 17:36:11.898979 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6455c87555-hpzsh"] Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.402945 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qk9xv" event={"ID":"8285d29e-4f51-4b0c-8dd9-c613317c933c","Type":"ContainerDied","Data":"0ef60a15417af862f335a7ae3f83812177f6cf066e7d5cce9d684a5f870934a7"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.403341 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ef60a15417af862f335a7ae3f83812177f6cf066e7d5cce9d684a5f870934a7" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.428254 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6455c87555-hpzsh" event={"ID":"527e03e8-f94d-4d8a-b04f-e867350c6f32","Type":"ContainerStarted","Data":"fe676c8f2a6933b4daaa10f85a44b3bc0b3d38c2c6a14afbc879c7887f3e45fd"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.463994 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4dtxc" 
event={"ID":"1af0664c-c80c-402b-bdd7-7e7fd1e5711e","Type":"ContainerStarted","Data":"de9f78c788c886fabd6aca7b3e715188fa10ba0ba90c90f3b22dd37bfc6050e7"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.464040 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4dtxc" event={"ID":"1af0664c-c80c-402b-bdd7-7e7fd1e5711e","Type":"ContainerStarted","Data":"b7fc184dd324461367e40300fa9867eb716f86d2dcbb05353ef14f4885d4ea66"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.470793 4823 generic.go:334] "Generic (PLEG): container finished" podID="18489f11-9dbf-4f69-b20c-cded3eae0292" containerID="f227d29d7664c616f2461bf6fe4d23afa666ef0c4ca5aec26313109479ab9ae0" exitCode=0 Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.470887 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" event={"ID":"18489f11-9dbf-4f69-b20c-cded3eae0292","Type":"ContainerDied","Data":"f227d29d7664c616f2461bf6fe4d23afa666ef0c4ca5aec26313109479ab9ae0"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.470914 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" event={"ID":"18489f11-9dbf-4f69-b20c-cded3eae0292","Type":"ContainerStarted","Data":"ffc8cd90a2f4c4c504d9374d92b5f59277c53689df722a800900d833c32552fe"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.476866 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fr4qp" event={"ID":"2d118987-76ea-46aa-9989-274e87e36d3a","Type":"ContainerStarted","Data":"64dfc9ac5847b858b357b0f1ec62aebe1c33d013b23f39ee568f932c5cc050d2"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.479199 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qk9xv" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.507125 4823 generic.go:334] "Generic (PLEG): container finished" podID="c12ac245-b6c4-472e-8a8b-ca63e607247d" containerID="82fd9f73cb7d26b333c1010612ca504f820942cfeb7960c85f8773b722bd771d" exitCode=0 Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.507256 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" event={"ID":"c12ac245-b6c4-472e-8a8b-ca63e607247d","Type":"ContainerDied","Data":"82fd9f73cb7d26b333c1010612ca504f820942cfeb7960c85f8773b722bd771d"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.507295 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" event={"ID":"c12ac245-b6c4-472e-8a8b-ca63e607247d","Type":"ContainerStarted","Data":"0bef2480f57426dcbed9e7f5bfbbdeb5db1f924e2938d1eb15821867ab2ffa92"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.508380 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4dtxc" podStartSLOduration=4.508366491 podStartE2EDuration="4.508366491s" podCreationTimestamp="2026-01-21 17:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:12.483935408 +0000 UTC m=+1173.410066268" watchObservedRunningTime="2026-01-21 17:36:12.508366491 +0000 UTC m=+1173.434497351" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.516497 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59549fbb65-j6m8v" 
event={"ID":"41a34226-6809-48a6-be65-12d1c025ef32","Type":"ContainerStarted","Data":"255aa2c04ca3f67d118d94a8b79f25deef33ed237a506c9bf408efbee1add51b"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.525668 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vh8wn" event={"ID":"c29754ad-e324-474f-a0df-d450b9152aa3","Type":"ContainerStarted","Data":"3a99e0228428acd7c3d1421496895ed122a2cff6f4143a9e3cae2968acedccd7"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.557409 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"72b0f3a123ad390f23519cdb04d6025371814c53ce58b6c56e764ae4373acfbb"} Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.679116 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-config-data\") pod \"8285d29e-4f51-4b0c-8dd9-c613317c933c\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.679171 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-db-sync-config-data\") pod \"8285d29e-4f51-4b0c-8dd9-c613317c933c\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.679240 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-combined-ca-bundle\") pod \"8285d29e-4f51-4b0c-8dd9-c613317c933c\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.679590 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmclg\" (UniqueName: \"kubernetes.io/projected/8285d29e-4f51-4b0c-8dd9-c613317c933c-kube-api-access-tmclg\") pod \"8285d29e-4f51-4b0c-8dd9-c613317c933c\" (UID: \"8285d29e-4f51-4b0c-8dd9-c613317c933c\") " Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.688191 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8285d29e-4f51-4b0c-8dd9-c613317c933c" (UID: "8285d29e-4f51-4b0c-8dd9-c613317c933c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.692749 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8285d29e-4f51-4b0c-8dd9-c613317c933c-kube-api-access-tmclg" (OuterVolumeSpecName: "kube-api-access-tmclg") pod "8285d29e-4f51-4b0c-8dd9-c613317c933c" (UID: "8285d29e-4f51-4b0c-8dd9-c613317c933c"). InnerVolumeSpecName "kube-api-access-tmclg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.783525 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmclg\" (UniqueName: \"kubernetes.io/projected/8285d29e-4f51-4b0c-8dd9-c613317c933c-kube-api-access-tmclg\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.783571 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.809827 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8285d29e-4f51-4b0c-8dd9-c613317c933c" (UID: "8285d29e-4f51-4b0c-8dd9-c613317c933c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.862622 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-config-data" (OuterVolumeSpecName: "config-data") pod "8285d29e-4f51-4b0c-8dd9-c613317c933c" (UID: "8285d29e-4f51-4b0c-8dd9-c613317c933c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.886817 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.886869 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8285d29e-4f51-4b0c-8dd9-c613317c933c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:12 crc kubenswrapper[4823]: I0121 17:36:12.962086 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.091968 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nt5t\" (UniqueName: \"kubernetes.io/projected/c12ac245-b6c4-472e-8a8b-ca63e607247d-kube-api-access-8nt5t\") pod \"c12ac245-b6c4-472e-8a8b-ca63e607247d\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.092050 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-sb\") pod \"c12ac245-b6c4-472e-8a8b-ca63e607247d\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.092106 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-nb\") pod \"c12ac245-b6c4-472e-8a8b-ca63e607247d\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.092274 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-dns-svc\") pod \"c12ac245-b6c4-472e-8a8b-ca63e607247d\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.092358 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-config\") pod \"c12ac245-b6c4-472e-8a8b-ca63e607247d\" (UID: \"c12ac245-b6c4-472e-8a8b-ca63e607247d\") " Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.107141 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12ac245-b6c4-472e-8a8b-ca63e607247d-kube-api-access-8nt5t" (OuterVolumeSpecName: "kube-api-access-8nt5t") pod "c12ac245-b6c4-472e-8a8b-ca63e607247d" (UID: "c12ac245-b6c4-472e-8a8b-ca63e607247d"). InnerVolumeSpecName "kube-api-access-8nt5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.166828 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-config" (OuterVolumeSpecName: "config") pod "c12ac245-b6c4-472e-8a8b-ca63e607247d" (UID: "c12ac245-b6c4-472e-8a8b-ca63e607247d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.230805 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c12ac245-b6c4-472e-8a8b-ca63e607247d" (UID: "c12ac245-b6c4-472e-8a8b-ca63e607247d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.233475 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nt5t\" (UniqueName: \"kubernetes.io/projected/c12ac245-b6c4-472e-8a8b-ca63e607247d-kube-api-access-8nt5t\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.233533 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.233551 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.238234 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c12ac245-b6c4-472e-8a8b-ca63e607247d" (UID: "c12ac245-b6c4-472e-8a8b-ca63e607247d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.246419 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c12ac245-b6c4-472e-8a8b-ca63e607247d" (UID: "c12ac245-b6c4-472e-8a8b-ca63e607247d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.344063 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.344101 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12ac245-b6c4-472e-8a8b-ca63e607247d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.482897 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.609875 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-jd72h" event={"ID":"0d800059-c35e-403a-a930-f1db60cf5c75","Type":"ContainerStarted","Data":"28cec83f764dd22d097ae9ca0aa9d95e707fc99c293ba4ce05d697fc6ba63b15"} Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.640548 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" event={"ID":"c12ac245-b6c4-472e-8a8b-ca63e607247d","Type":"ContainerDied","Data":"0bef2480f57426dcbed9e7f5bfbbdeb5db1f924e2938d1eb15821867ab2ffa92"} Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.640579 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-6l7sh" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.643222 4823 scope.go:117] "RemoveContainer" containerID="82fd9f73cb7d26b333c1010612ca504f820942cfeb7960c85f8773b722bd771d" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.676123 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-jd72h" podStartSLOduration=3.147228235 podStartE2EDuration="51.676097669s" podCreationTimestamp="2026-01-21 17:35:22 +0000 UTC" firstStartedPulling="2026-01-21 17:35:24.194163101 +0000 UTC m=+1125.120293961" lastFinishedPulling="2026-01-21 17:36:12.723032535 +0000 UTC m=+1173.649163395" observedRunningTime="2026-01-21 17:36:13.632550393 +0000 UTC m=+1174.558681253" watchObservedRunningTime="2026-01-21 17:36:13.676097669 +0000 UTC m=+1174.602228529" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.702035 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-6l7sh"] Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.702827 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"39dee6293f9d75aadbdb6189635b468038dae04499173edd56ed9ffd22431315"} Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.702872 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"e874b580e6baf935fa3bd25adf1c4093382ea8195832efd81ae1415c273f41f7"} Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.702882 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"f09e52b5bcf03f9de6422815e3bbd902f04f1e6c8fa9c46a7dd7f5a963abc14e"} Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.721108 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-6l7sh"] Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.732771 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" event={"ID":"18489f11-9dbf-4f69-b20c-cded3eae0292","Type":"ContainerStarted","Data":"3141461e7d904f45b75a8df4f0493c7dade19a50a48974bbee7c5a8903f126bd"} Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.733328 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qk9xv" Jan 21 17:36:13 crc kubenswrapper[4823]: I0121 17:36:13.787211 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" podStartSLOduration=4.787186993 podStartE2EDuration="4.787186993s" podCreationTimestamp="2026-01-21 17:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:13.764532974 +0000 UTC m=+1174.690663834" watchObservedRunningTime="2026-01-21 17:36:13.787186993 +0000 UTC m=+1174.713317853" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.107317 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-gjk9h"] Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.182182 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56798b757f-s9jlt"] Jan 21 17:36:14 crc kubenswrapper[4823]: E0121 17:36:14.183330 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12ac245-b6c4-472e-8a8b-ca63e607247d" containerName="init" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.183629 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12ac245-b6c4-472e-8a8b-ca63e607247d" containerName="init" Jan 21 17:36:14 crc kubenswrapper[4823]: E0121 17:36:14.183767 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8285d29e-4f51-4b0c-8dd9-c613317c933c" containerName="glance-db-sync" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.183895 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8285d29e-4f51-4b0c-8dd9-c613317c933c" containerName="glance-db-sync" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.184167 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8285d29e-4f51-4b0c-8dd9-c613317c933c" containerName="glance-db-sync" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.184265 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12ac245-b6c4-472e-8a8b-ca63e607247d" containerName="init" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.193512 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.225835 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-s9jlt"] Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.394573 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-config\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.394622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-dns-svc\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.394640 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.394676 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.394796 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456jd\" (UniqueName: \"kubernetes.io/projected/2e6c61dd-3761-488d-a86d-273b856a1bcc-kube-api-access-456jd\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.496538 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-config\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.496609 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-dns-svc\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.496651 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.496676 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.496796 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-456jd\" (UniqueName: \"kubernetes.io/projected/2e6c61dd-3761-488d-a86d-273b856a1bcc-kube-api-access-456jd\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.506252 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-dns-svc\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.506771 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-config\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.509363 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.516506 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.548674 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-456jd\" (UniqueName: \"kubernetes.io/projected/2e6c61dd-3761-488d-a86d-273b856a1bcc-kube-api-access-456jd\") pod \"dnsmasq-dns-56798b757f-s9jlt\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.665177 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.874375 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"a745be50f6581df2aa7d69cd3aeb6dad57d06ce9e892b2dc33df3343db248efa"} Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.874419 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.932466 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.951230 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.955806 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.959279 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.962718 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x59xs" Jan 21 17:36:14 crc kubenswrapper[4823]: I0121 17:36:14.979383 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.070822 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.070961 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.071014 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.071847 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf25751a26ff3c64f8ae67c52c13c550034b8f3dcc6b86f0b444f95206ccf684"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.071915 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://bf25751a26ff3c64f8ae67c52c13c550034b8f3dcc6b86f0b444f95206ccf684" gracePeriod=600 Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.084115 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-config-data\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.085093 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rznws\" (UniqueName: \"kubernetes.io/projected/31345bdc-736a-4c87-b36d-ebb7e0162789-kube-api-access-rznws\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.085174 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-scripts\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.085196 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.085289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-logs\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.085320 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.085354 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.189272 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-scripts\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.189320 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.189399 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-logs\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.189437 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.189471 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.189503 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-config-data\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.189574 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rznws\" (UniqueName: \"kubernetes.io/projected/31345bdc-736a-4c87-b36d-ebb7e0162789-kube-api-access-rznws\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.190221 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.190908 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.190840 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-logs\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.212451 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-config-data\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.214487 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-scripts\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.218974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rznws\" (UniqueName: \"kubernetes.io/projected/31345bdc-736a-4c87-b36d-ebb7e0162789-kube-api-access-rznws\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.228953 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " 
pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.349435 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.366086 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12ac245-b6c4-472e-8a8b-ca63e607247d" path="/var/lib/kubelet/pods/c12ac245-b6c4-472e-8a8b-ca63e607247d/volumes" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.380731 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-s9jlt"] Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.535174 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.542757 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.546442 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.566668 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.586223 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.716373 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.716735 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-logs\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.716764 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbnf\" (UniqueName: \"kubernetes.io/projected/b01d19d3-e6c2-4083-82bb-4927e4325302-kube-api-access-vfbnf\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.716941 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.716968 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.717032 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.717126 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.824171 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.824274 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.824305 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-logs\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.824335 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfbnf\" (UniqueName: \"kubernetes.io/projected/b01d19d3-e6c2-4083-82bb-4927e4325302-kube-api-access-vfbnf\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.824457 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.824483 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.824535 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.826565 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-logs\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.826873 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.827961 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.839658 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.843798 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.844131 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.889682 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.908445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfbnf\" (UniqueName: \"kubernetes.io/projected/b01d19d3-e6c2-4083-82bb-4927e4325302-kube-api-access-vfbnf\") pod \"glance-default-internal-api-0\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.931824 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="bf25751a26ff3c64f8ae67c52c13c550034b8f3dcc6b86f0b444f95206ccf684" exitCode=0 Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.931916 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"bf25751a26ff3c64f8ae67c52c13c550034b8f3dcc6b86f0b444f95206ccf684"} Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.931955 4823 scope.go:117] "RemoveContainer" containerID="fcf0ff8adb2bfb185b4729793f83cb8f174b95d31f0510e681726cc4e1eb2380" Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.949874 4823 generic.go:334] "Generic (PLEG): container finished" podID="2e6c61dd-3761-488d-a86d-273b856a1bcc" containerID="708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be" exitCode=0 Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.949946 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" event={"ID":"2e6c61dd-3761-488d-a86d-273b856a1bcc","Type":"ContainerDied","Data":"708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be"} Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.950026 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" event={"ID":"2e6c61dd-3761-488d-a86d-273b856a1bcc","Type":"ContainerStarted","Data":"b9e4a3a93658d7d0df546b26b5a9ded11ec6db4864065309e343962492b8bcdd"} Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.985776 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" podUID="18489f11-9dbf-4f69-b20c-cded3eae0292" containerName="dnsmasq-dns" containerID="cri-o://3141461e7d904f45b75a8df4f0493c7dade19a50a48974bbee7c5a8903f126bd" gracePeriod=10 Jan 21 17:36:15 crc kubenswrapper[4823]: I0121 17:36:15.985930 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1146f69b-d935-4a56-9f65-e96bf9539c14","Type":"ContainerStarted","Data":"ed0858ec037b775365938082b0c042320823b349c29ba0ddfb6d6ff6a6ff3885"} Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.062592 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=60.661147485 podStartE2EDuration="1m10.062567594s" podCreationTimestamp="2026-01-21 17:35:06 +0000 UTC" firstStartedPulling="2026-01-21 17:36:00.541082645 +0000 UTC m=+1161.467213505" lastFinishedPulling="2026-01-21 17:36:09.942502744 +0000 UTC m=+1170.868633614" observedRunningTime="2026-01-21 17:36:16.041037082 +0000 UTC m=+1176.967167942" watchObservedRunningTime="2026-01-21 17:36:16.062567594 +0000 UTC m=+1176.988698454" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.188461 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.481242 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-s9jlt"] Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.551027 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-srv5l"] Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.553277 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.556288 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-srv5l"] Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.563419 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.666013 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.666154 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-config\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.666188 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.666216 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jptlf\" (UniqueName: \"kubernetes.io/projected/f3254a75-e9bf-4947-b472-c4d824599c49-kube-api-access-jptlf\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.666259 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.666289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.771178 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.771378 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-config\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: 
\"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.772231 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.772233 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.772264 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jptlf\" (UniqueName: \"kubernetes.io/projected/f3254a75-e9bf-4947-b472-c4d824599c49-kube-api-access-jptlf\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.772361 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.772393 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.773262 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-config\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.774437 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.774492 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.775272 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 
17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.809532 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jptlf\" (UniqueName: \"kubernetes.io/projected/f3254a75-e9bf-4947-b472-c4d824599c49-kube-api-access-jptlf\") pod \"dnsmasq-dns-56df8fb6b7-srv5l\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.925494 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:16 crc kubenswrapper[4823]: I0121 17:36:16.927913 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:36:16 crc kubenswrapper[4823]: W0121 17:36:16.959007 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31345bdc_736a_4c87_b36d_ebb7e0162789.slice/crio-14e95015d4f20953e4529882a5ab1396de360871a6b76696aa539decdaa3c5a9 WatchSource:0}: Error finding container 14e95015d4f20953e4529882a5ab1396de360871a6b76696aa539decdaa3c5a9: Status 404 returned error can't find the container with id 14e95015d4f20953e4529882a5ab1396de360871a6b76696aa539decdaa3c5a9 Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.026771 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" event={"ID":"2e6c61dd-3761-488d-a86d-273b856a1bcc","Type":"ContainerStarted","Data":"97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259"} Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.027004 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" podUID="2e6c61dd-3761-488d-a86d-273b856a1bcc" containerName="dnsmasq-dns" containerID="cri-o://97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259" gracePeriod=10 Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.027345 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.048900 4823 generic.go:334] "Generic (PLEG): container finished" podID="18489f11-9dbf-4f69-b20c-cded3eae0292" containerID="3141461e7d904f45b75a8df4f0493c7dade19a50a48974bbee7c5a8903f126bd" exitCode=0 Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.049225 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" event={"ID":"18489f11-9dbf-4f69-b20c-cded3eae0292","Type":"ContainerDied","Data":"3141461e7d904f45b75a8df4f0493c7dade19a50a48974bbee7c5a8903f126bd"} Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.053782 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31345bdc-736a-4c87-b36d-ebb7e0162789","Type":"ContainerStarted","Data":"14e95015d4f20953e4529882a5ab1396de360871a6b76696aa539decdaa3c5a9"} Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.103611 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"e5cf874111542cda3e34240991e3ef5c73b1f1132ce5389832d5612a1548617a"} Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.129589 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" 
podStartSLOduration=3.129561503 podStartE2EDuration="3.129561503s" podCreationTimestamp="2026-01-21 17:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:17.069766566 +0000 UTC m=+1177.995897436" watchObservedRunningTime="2026-01-21 17:36:17.129561503 +0000 UTC m=+1178.055692363" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.409083 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.506999 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-dns-svc\") pod \"18489f11-9dbf-4f69-b20c-cded3eae0292\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.507492 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpw28\" (UniqueName: \"kubernetes.io/projected/18489f11-9dbf-4f69-b20c-cded3eae0292-kube-api-access-mpw28\") pod \"18489f11-9dbf-4f69-b20c-cded3eae0292\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.507522 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-nb\") pod \"18489f11-9dbf-4f69-b20c-cded3eae0292\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.507769 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-config\") pod \"18489f11-9dbf-4f69-b20c-cded3eae0292\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.507917 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-sb\") pod \"18489f11-9dbf-4f69-b20c-cded3eae0292\" (UID: \"18489f11-9dbf-4f69-b20c-cded3eae0292\") " Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.562943 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18489f11-9dbf-4f69-b20c-cded3eae0292-kube-api-access-mpw28" (OuterVolumeSpecName: "kube-api-access-mpw28") pod "18489f11-9dbf-4f69-b20c-cded3eae0292" (UID: "18489f11-9dbf-4f69-b20c-cded3eae0292"). InnerVolumeSpecName "kube-api-access-mpw28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.589653 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-srv5l"] Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.590975 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-config" (OuterVolumeSpecName: "config") pod "18489f11-9dbf-4f69-b20c-cded3eae0292" (UID: "18489f11-9dbf-4f69-b20c-cded3eae0292"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.611068 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.611108 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpw28\" (UniqueName: \"kubernetes.io/projected/18489f11-9dbf-4f69-b20c-cded3eae0292-kube-api-access-mpw28\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.621438 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:36:17 crc kubenswrapper[4823]: W0121 17:36:17.627155 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3254a75_e9bf_4947_b472_c4d824599c49.slice/crio-febc47bda305dd147fa805d6e62372619915cbf0d0ef9378f49a93ad8807138c WatchSource:0}: Error finding container febc47bda305dd147fa805d6e62372619915cbf0d0ef9378f49a93ad8807138c: Status 404 returned error can't find the container with id febc47bda305dd147fa805d6e62372619915cbf0d0ef9378f49a93ad8807138c Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.672269 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18489f11-9dbf-4f69-b20c-cded3eae0292" (UID: "18489f11-9dbf-4f69-b20c-cded3eae0292"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.713201 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.740255 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18489f11-9dbf-4f69-b20c-cded3eae0292" (UID: "18489f11-9dbf-4f69-b20c-cded3eae0292"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.802582 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18489f11-9dbf-4f69-b20c-cded3eae0292" (UID: "18489f11-9dbf-4f69-b20c-cded3eae0292"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.824224 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:17 crc kubenswrapper[4823]: I0121 17:36:17.824285 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18489f11-9dbf-4f69-b20c-cded3eae0292-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.162319 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" event={"ID":"18489f11-9dbf-4f69-b20c-cded3eae0292","Type":"ContainerDied","Data":"ffc8cd90a2f4c4c504d9374d92b5f59277c53689df722a800900d833c32552fe"} Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.162655 4823 scope.go:117] "RemoveContainer" containerID="3141461e7d904f45b75a8df4f0493c7dade19a50a48974bbee7c5a8903f126bd" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.162803 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-gjk9h" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.192137 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.195044 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b01d19d3-e6c2-4083-82bb-4927e4325302","Type":"ContainerStarted","Data":"2f154fdb9ab7d7b47b4b52026469657f8af74153f94481b1aa2edc7e0ed5673a"} Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.202713 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" event={"ID":"f3254a75-e9bf-4947-b472-c4d824599c49","Type":"ContainerStarted","Data":"febc47bda305dd147fa805d6e62372619915cbf0d0ef9378f49a93ad8807138c"} Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.235818 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-dns-svc\") pod \"2e6c61dd-3761-488d-a86d-273b856a1bcc\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.235922 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-sb\") pod \"2e6c61dd-3761-488d-a86d-273b856a1bcc\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.235948 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-456jd\" (UniqueName: \"kubernetes.io/projected/2e6c61dd-3761-488d-a86d-273b856a1bcc-kube-api-access-456jd\") pod \"2e6c61dd-3761-488d-a86d-273b856a1bcc\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.236010 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-config\") pod \"2e6c61dd-3761-488d-a86d-273b856a1bcc\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.236162 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-nb\") pod \"2e6c61dd-3761-488d-a86d-273b856a1bcc\" (UID: \"2e6c61dd-3761-488d-a86d-273b856a1bcc\") " Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.247909 4823 generic.go:334] "Generic (PLEG): container finished" podID="2e6c61dd-3761-488d-a86d-273b856a1bcc" containerID="97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259" exitCode=0 Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.249016 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.254416 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" event={"ID":"2e6c61dd-3761-488d-a86d-273b856a1bcc","Type":"ContainerDied","Data":"97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259"} Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.254475 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-s9jlt" event={"ID":"2e6c61dd-3761-488d-a86d-273b856a1bcc","Type":"ContainerDied","Data":"b9e4a3a93658d7d0df546b26b5a9ded11ec6db4864065309e343962492b8bcdd"} Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.256058 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6c61dd-3761-488d-a86d-273b856a1bcc-kube-api-access-456jd" (OuterVolumeSpecName: "kube-api-access-456jd") pod "2e6c61dd-3761-488d-a86d-273b856a1bcc" (UID: "2e6c61dd-3761-488d-a86d-273b856a1bcc"). InnerVolumeSpecName "kube-api-access-456jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.271019 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-gjk9h"] Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.321213 4823 scope.go:117] "RemoveContainer" containerID="f227d29d7664c616f2461bf6fe4d23afa666ef0c4ca5aec26313109479ab9ae0" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.327584 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-gjk9h"] Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.339309 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-456jd\" (UniqueName: \"kubernetes.io/projected/2e6c61dd-3761-488d-a86d-273b856a1bcc-kube-api-access-456jd\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.425063 4823 scope.go:117] "RemoveContainer" containerID="97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.486601 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.499990 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.609195 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2e6c61dd-3761-488d-a86d-273b856a1bcc" (UID: "2e6c61dd-3761-488d-a86d-273b856a1bcc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.621675 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-config" (OuterVolumeSpecName: "config") pod "2e6c61dd-3761-488d-a86d-273b856a1bcc" (UID: "2e6c61dd-3761-488d-a86d-273b856a1bcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.621831 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e6c61dd-3761-488d-a86d-273b856a1bcc" (UID: "2e6c61dd-3761-488d-a86d-273b856a1bcc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.627607 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2e6c61dd-3761-488d-a86d-273b856a1bcc" (UID: "2e6c61dd-3761-488d-a86d-273b856a1bcc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.661125 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.661167 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.661178 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.661187 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e6c61dd-3761-488d-a86d-273b856a1bcc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.808469 4823 scope.go:117] "RemoveContainer" containerID="708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.888151 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-s9jlt"] Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.901359 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-s9jlt"] Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.902693 4823 scope.go:117] "RemoveContainer" containerID="97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259" Jan 21 17:36:18 crc kubenswrapper[4823]: E0121 17:36:18.903905 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259\": container with ID starting with 97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259 not found: ID does not exist" containerID="97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259" Jan 21 17:36:18 crc 
kubenswrapper[4823]: I0121 17:36:18.903959 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259"} err="failed to get container status \"97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259\": rpc error: code = NotFound desc = could not find container \"97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259\": container with ID starting with 97ef1fbeb54f6db3087340e238abb3702afc9a5b6a5eb216bd6a96871c71e259 not found: ID does not exist" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.903987 4823 scope.go:117] "RemoveContainer" containerID="708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be" Jan 21 17:36:18 crc kubenswrapper[4823]: E0121 17:36:18.909674 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be\": container with ID starting with 708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be not found: ID does not exist" containerID="708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be" Jan 21 17:36:18 crc kubenswrapper[4823]: I0121 17:36:18.909713 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be"} err="failed to get container status \"708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be\": rpc error: code = NotFound desc = could not find container \"708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be\": container with ID starting with 708e554a29dfda7b3fe9aa4804799a7e8d32c221604e5a61f3679a31a022a7be not found: ID does not exist" Jan 21 17:36:19 crc kubenswrapper[4823]: I0121 17:36:19.333432 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b01d19d3-e6c2-4083-82bb-4927e4325302","Type":"ContainerStarted","Data":"2bf3c7e21b24bc95d05dc7bc3e739fd60d4d32c1835625a19bee46ec020fc582"} Jan 21 17:36:19 crc kubenswrapper[4823]: I0121 17:36:19.336117 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31345bdc-736a-4c87-b36d-ebb7e0162789","Type":"ContainerStarted","Data":"2b99d49c7f921eb81d72298c94a5b90ae8e22c642161d1a1961f0b2eac476828"} Jan 21 17:36:19 crc kubenswrapper[4823]: I0121 17:36:19.354671 4823 generic.go:334] "Generic (PLEG): container finished" podID="f3254a75-e9bf-4947-b472-c4d824599c49" containerID="94bc527df58640219a854f5d371c4bfcf1e412934dea61faf7569c7fad343893" exitCode=0 Jan 21 17:36:19 crc kubenswrapper[4823]: I0121 17:36:19.372196 4823 generic.go:334] "Generic (PLEG): container finished" podID="1af0664c-c80c-402b-bdd7-7e7fd1e5711e" containerID="de9f78c788c886fabd6aca7b3e715188fa10ba0ba90c90f3b22dd37bfc6050e7" exitCode=0 Jan 21 17:36:19 crc kubenswrapper[4823]: I0121 17:36:19.380440 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18489f11-9dbf-4f69-b20c-cded3eae0292" path="/var/lib/kubelet/pods/18489f11-9dbf-4f69-b20c-cded3eae0292/volumes" Jan 21 17:36:19 crc kubenswrapper[4823]: I0121 17:36:19.382533 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e6c61dd-3761-488d-a86d-273b856a1bcc" path="/var/lib/kubelet/pods/2e6c61dd-3761-488d-a86d-273b856a1bcc/volumes" Jan 21 17:36:19 crc kubenswrapper[4823]: I0121 17:36:19.383745 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:19 crc kubenswrapper[4823]: I0121 17:36:19.383802 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" event={"ID":"f3254a75-e9bf-4947-b472-c4d824599c49","Type":"ContainerDied","Data":"94bc527df58640219a854f5d371c4bfcf1e412934dea61faf7569c7fad343893"} Jan 21 17:36:19 crc kubenswrapper[4823]: I0121 17:36:19.383912 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4dtxc" event={"ID":"1af0664c-c80c-402b-bdd7-7e7fd1e5711e","Type":"ContainerDied","Data":"de9f78c788c886fabd6aca7b3e715188fa10ba0ba90c90f3b22dd37bfc6050e7"} Jan 21 17:36:20 crc kubenswrapper[4823]: I0121 17:36:20.376699 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:36:20 crc kubenswrapper[4823]: I0121 17:36:20.389089 4823 generic.go:334] "Generic (PLEG): container finished" podID="0d800059-c35e-403a-a930-f1db60cf5c75" containerID="28cec83f764dd22d097ae9ca0aa9d95e707fc99c293ba4ce05d697fc6ba63b15" exitCode=0 Jan 21 17:36:20 crc kubenswrapper[4823]: I0121 17:36:20.389181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-jd72h" event={"ID":"0d800059-c35e-403a-a930-f1db60cf5c75","Type":"ContainerDied","Data":"28cec83f764dd22d097ae9ca0aa9d95e707fc99c293ba4ce05d697fc6ba63b15"} Jan 21 17:36:20 crc kubenswrapper[4823]: I0121 17:36:20.408002 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31345bdc-736a-4c87-b36d-ebb7e0162789","Type":"ContainerStarted","Data":"0adf20c8b4cf5f4e67a21a6d6b6d96e356f39461463592cb78b63269d2ea40e4"} Jan 21 17:36:20 crc kubenswrapper[4823]: I0121 17:36:20.428509 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" event={"ID":"f3254a75-e9bf-4947-b472-c4d824599c49","Type":"ContainerStarted","Data":"8b5252e3c9d84c533990fe68e839f9a7db254971e297bc194e33ce04c493e486"} Jan 21 17:36:20 crc kubenswrapper[4823]: I0121 17:36:20.428725 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:20 crc kubenswrapper[4823]: I0121 17:36:20.485059 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:36:20 crc kubenswrapper[4823]: I0121 17:36:20.511221 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.511192821 podStartE2EDuration="7.511192821s" podCreationTimestamp="2026-01-21 17:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:20.472557587 +0000 UTC m=+1181.398688447" watchObservedRunningTime="2026-01-21 17:36:20.511192821 +0000 UTC m=+1181.437323691" Jan 21 17:36:20 crc kubenswrapper[4823]: I0121 17:36:20.560640 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" podStartSLOduration=4.5606081320000005 podStartE2EDuration="4.560608132s" podCreationTimestamp="2026-01-21 17:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:20.530119079 +0000 UTC m=+1181.456249959" watchObservedRunningTime="2026-01-21 17:36:20.560608132 +0000 UTC m=+1181.486738992" 
Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.462754 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b01d19d3-e6c2-4083-82bb-4927e4325302","Type":"ContainerStarted","Data":"47ea7e6cce0ea3ab8371b174abd53673cac810c2e4f7c64bf6ac4939e43d7dab"} Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.463472 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="31345bdc-736a-4c87-b36d-ebb7e0162789" containerName="glance-log" containerID="cri-o://2b99d49c7f921eb81d72298c94a5b90ae8e22c642161d1a1961f0b2eac476828" gracePeriod=30 Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.463734 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerName="glance-log" containerID="cri-o://2bf3c7e21b24bc95d05dc7bc3e739fd60d4d32c1835625a19bee46ec020fc582" gracePeriod=30 Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.463769 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="31345bdc-736a-4c87-b36d-ebb7e0162789" containerName="glance-httpd" containerID="cri-o://0adf20c8b4cf5f4e67a21a6d6b6d96e356f39461463592cb78b63269d2ea40e4" gracePeriod=30 Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.463870 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerName="glance-httpd" containerID="cri-o://47ea7e6cce0ea3ab8371b174abd53673cac810c2e4f7c64bf6ac4939e43d7dab" gracePeriod=30 Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.522206 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.522176876 podStartE2EDuration="7.522176876s" podCreationTimestamp="2026-01-21 17:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:21.506251693 +0000 UTC m=+1182.432382573" watchObservedRunningTime="2026-01-21 17:36:21.522176876 +0000 UTC m=+1182.448307736" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.711383 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-969658cd5-27ggm"] Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.755290 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64bdb4885b-ddprk"] Jan 21 17:36:21 crc kubenswrapper[4823]: E0121 17:36:21.756434 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6c61dd-3761-488d-a86d-273b856a1bcc" containerName="init" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.756537 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6c61dd-3761-488d-a86d-273b856a1bcc" containerName="init" Jan 21 17:36:21 crc kubenswrapper[4823]: E0121 17:36:21.756736 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18489f11-9dbf-4f69-b20c-cded3eae0292" containerName="init" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.756828 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="18489f11-9dbf-4f69-b20c-cded3eae0292" containerName="init" Jan 21 17:36:21 crc kubenswrapper[4823]: E0121 17:36:21.756908 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2e6c61dd-3761-488d-a86d-273b856a1bcc" containerName="dnsmasq-dns" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.756964 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6c61dd-3761-488d-a86d-273b856a1bcc" containerName="dnsmasq-dns" Jan 21 17:36:21 crc kubenswrapper[4823]: E0121 17:36:21.757620 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18489f11-9dbf-4f69-b20c-cded3eae0292" containerName="dnsmasq-dns" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.757720 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="18489f11-9dbf-4f69-b20c-cded3eae0292" containerName="dnsmasq-dns" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.802828 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6c61dd-3761-488d-a86d-273b856a1bcc" containerName="dnsmasq-dns" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.802925 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="18489f11-9dbf-4f69-b20c-cded3eae0292" containerName="dnsmasq-dns" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.865100 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.865956 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64bdb4885b-ddprk"] Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.870595 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.904903 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6455c87555-hpzsh"] Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.926396 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-combined-ca-bundle\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.926604 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-secret-key\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.926646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-tls-certs\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.926705 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7890c9eb-67a6-4c41-af5b-c57f0fddc533-logs\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.927048 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-scripts\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.927129 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-config-data\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.927337 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw759\" (UniqueName: \"kubernetes.io/projected/7890c9eb-67a6-4c41-af5b-c57f0fddc533-kube-api-access-jw759\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.953749 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66bddd7dd6-67b6t"] Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.960349 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:21 crc kubenswrapper[4823]: I0121 17:36:21.990714 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66bddd7dd6-67b6t"] Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029232 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/709299a1-f499-447b-a738-efe1b32c7abf-logs\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029309 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/709299a1-f499-447b-a738-efe1b32c7abf-scripts\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029364 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-combined-ca-bundle\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029482 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/709299a1-f499-447b-a738-efe1b32c7abf-config-data\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029555 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/709299a1-f499-447b-a738-efe1b32c7abf-horizon-tls-certs\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029620 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-secret-key\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-tls-certs\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029728 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7890c9eb-67a6-4c41-af5b-c57f0fddc533-logs\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2kt\" (UniqueName: \"kubernetes.io/projected/709299a1-f499-447b-a738-efe1b32c7abf-kube-api-access-qw2kt\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029907 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-scripts\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029961 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/709299a1-f499-447b-a738-efe1b32c7abf-horizon-secret-key\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.029987 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709299a1-f499-447b-a738-efe1b32c7abf-combined-ca-bundle\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.030008 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-config-data\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.031292 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw759\" (UniqueName: \"kubernetes.io/projected/7890c9eb-67a6-4c41-af5b-c57f0fddc533-kube-api-access-jw759\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.032358 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-scripts\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.032726 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-config-data\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.033662 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7890c9eb-67a6-4c41-af5b-c57f0fddc533-logs\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.040337 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-combined-ca-bundle\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.041000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-tls-certs\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.060371 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-secret-key\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.073700 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw759\" (UniqueName: \"kubernetes.io/projected/7890c9eb-67a6-4c41-af5b-c57f0fddc533-kube-api-access-jw759\") pod \"horizon-64bdb4885b-ddprk\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") " pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.137993 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/709299a1-f499-447b-a738-efe1b32c7abf-logs\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.138810 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/709299a1-f499-447b-a738-efe1b32c7abf-scripts\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.138693 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/709299a1-f499-447b-a738-efe1b32c7abf-logs\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc 
kubenswrapper[4823]: I0121 17:36:22.140008 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/709299a1-f499-447b-a738-efe1b32c7abf-scripts\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.141258 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/709299a1-f499-447b-a738-efe1b32c7abf-config-data\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.141302 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/709299a1-f499-447b-a738-efe1b32c7abf-horizon-tls-certs\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.142367 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/709299a1-f499-447b-a738-efe1b32c7abf-config-data\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.143659 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2kt\" (UniqueName: \"kubernetes.io/projected/709299a1-f499-447b-a738-efe1b32c7abf-kube-api-access-qw2kt\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.144635 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/709299a1-f499-447b-a738-efe1b32c7abf-horizon-secret-key\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.147046 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/709299a1-f499-447b-a738-efe1b32c7abf-horizon-tls-certs\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.147409 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709299a1-f499-447b-a738-efe1b32c7abf-combined-ca-bundle\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.149174 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/709299a1-f499-447b-a738-efe1b32c7abf-horizon-secret-key\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.155478 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/709299a1-f499-447b-a738-efe1b32c7abf-combined-ca-bundle\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.166139 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2kt\" (UniqueName: \"kubernetes.io/projected/709299a1-f499-447b-a738-efe1b32c7abf-kube-api-access-qw2kt\") pod \"horizon-66bddd7dd6-67b6t\" (UID: \"709299a1-f499-447b-a738-efe1b32c7abf\") " pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.251303 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.307929 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.489643 4823 generic.go:334] "Generic (PLEG): container finished" podID="31345bdc-736a-4c87-b36d-ebb7e0162789" containerID="0adf20c8b4cf5f4e67a21a6d6b6d96e356f39461463592cb78b63269d2ea40e4" exitCode=0 Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.489672 4823 generic.go:334] "Generic (PLEG): container finished" podID="31345bdc-736a-4c87-b36d-ebb7e0162789" containerID="2b99d49c7f921eb81d72298c94a5b90ae8e22c642161d1a1961f0b2eac476828" exitCode=143 Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.489695 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31345bdc-736a-4c87-b36d-ebb7e0162789","Type":"ContainerDied","Data":"0adf20c8b4cf5f4e67a21a6d6b6d96e356f39461463592cb78b63269d2ea40e4"} Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.489763 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31345bdc-736a-4c87-b36d-ebb7e0162789","Type":"ContainerDied","Data":"2b99d49c7f921eb81d72298c94a5b90ae8e22c642161d1a1961f0b2eac476828"} Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.493721 4823 generic.go:334] "Generic (PLEG): container finished" podID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerID="47ea7e6cce0ea3ab8371b174abd53673cac810c2e4f7c64bf6ac4939e43d7dab" exitCode=0 Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.493742 4823 generic.go:334] "Generic (PLEG): container finished" podID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerID="2bf3c7e21b24bc95d05dc7bc3e739fd60d4d32c1835625a19bee46ec020fc582" exitCode=143 Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.493759 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b01d19d3-e6c2-4083-82bb-4927e4325302","Type":"ContainerDied","Data":"47ea7e6cce0ea3ab8371b174abd53673cac810c2e4f7c64bf6ac4939e43d7dab"} Jan 21 17:36:22 crc kubenswrapper[4823]: I0121 17:36:22.493777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b01d19d3-e6c2-4083-82bb-4927e4325302","Type":"ContainerDied","Data":"2bf3c7e21b24bc95d05dc7bc3e739fd60d4d32c1835625a19bee46ec020fc582"} Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.212449 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.213338 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/prometheus-metric-storage-0" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="config-reloader" containerID="cri-o://583a0e2faa3deb9f5f136fb8e74f7df8738130af6fe90f46666e39764b49e975" gracePeriod=600 Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.213941 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="prometheus" containerID="cri-o://e3c7bce20716bbc4d63196a0b4b22eea62cf5e1508f90b88acb048019b2705b9" gracePeriod=600 Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.214004 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="thanos-sidecar" containerID="cri-o://0b9c1ca3c3f00c2fea0933ecc1a234742c5be85950363e795331c7103ea267f4" gracePeriod=600 Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.392754 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-jd72h" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.531650 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlfnr\" (UniqueName: \"kubernetes.io/projected/0d800059-c35e-403a-a930-f1db60cf5c75-kube-api-access-xlfnr\") pod \"0d800059-c35e-403a-a930-f1db60cf5c75\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.531724 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-db-sync-config-data\") pod \"0d800059-c35e-403a-a930-f1db60cf5c75\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.531825 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-config-data\") pod \"0d800059-c35e-403a-a930-f1db60cf5c75\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.532146 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-combined-ca-bundle\") pod \"0d800059-c35e-403a-a930-f1db60cf5c75\" (UID: \"0d800059-c35e-403a-a930-f1db60cf5c75\") " Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.545410 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0d800059-c35e-403a-a930-f1db60cf5c75" (UID: "0d800059-c35e-403a-a930-f1db60cf5c75"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.551097 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d800059-c35e-403a-a930-f1db60cf5c75-kube-api-access-xlfnr" (OuterVolumeSpecName: "kube-api-access-xlfnr") pod "0d800059-c35e-403a-a930-f1db60cf5c75" (UID: "0d800059-c35e-403a-a930-f1db60cf5c75"). InnerVolumeSpecName "kube-api-access-xlfnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.566989 4823 generic.go:334] "Generic (PLEG): container finished" podID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerID="e3c7bce20716bbc4d63196a0b4b22eea62cf5e1508f90b88acb048019b2705b9" exitCode=0 Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.567052 4823 generic.go:334] "Generic (PLEG): container finished" podID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerID="0b9c1ca3c3f00c2fea0933ecc1a234742c5be85950363e795331c7103ea267f4" exitCode=0 Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.567127 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerDied","Data":"e3c7bce20716bbc4d63196a0b4b22eea62cf5e1508f90b88acb048019b2705b9"} Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.567169 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerDied","Data":"0b9c1ca3c3f00c2fea0933ecc1a234742c5be85950363e795331c7103ea267f4"} Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.571534 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-jd72h" event={"ID":"0d800059-c35e-403a-a930-f1db60cf5c75","Type":"ContainerDied","Data":"3bb64add4ee4c6bd8f60a94f493acb2b97dc8a69f06b16555f716c02a410982f"} Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.571579 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb64add4ee4c6bd8f60a94f493acb2b97dc8a69f06b16555f716c02a410982f" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.571778 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-jd72h" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.586685 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d800059-c35e-403a-a930-f1db60cf5c75" (UID: "0d800059-c35e-403a-a930-f1db60cf5c75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.626363 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-config-data" (OuterVolumeSpecName: "config-data") pod "0d800059-c35e-403a-a930-f1db60cf5c75" (UID: "0d800059-c35e-403a-a930-f1db60cf5c75"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.634867 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.634911 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlfnr\" (UniqueName: \"kubernetes.io/projected/0d800059-c35e-403a-a930-f1db60cf5c75-kube-api-access-xlfnr\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.634928 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:24 crc kubenswrapper[4823]: I0121 17:36:24.634940 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d800059-c35e-403a-a930-f1db60cf5c75-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.590162 4823 generic.go:334] "Generic (PLEG): container finished" podID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerID="583a0e2faa3deb9f5f136fb8e74f7df8738130af6fe90f46666e39764b49e975" exitCode=0 Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.590681 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerDied","Data":"583a0e2faa3deb9f5f136fb8e74f7df8738130af6fe90f46666e39764b49e975"} Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.679727 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 21 17:36:25 crc kubenswrapper[4823]: E0121 17:36:25.680437 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d800059-c35e-403a-a930-f1db60cf5c75" containerName="watcher-db-sync" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.680469 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d800059-c35e-403a-a930-f1db60cf5c75" containerName="watcher-db-sync" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.682830 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d800059-c35e-403a-a930-f1db60cf5c75" containerName="watcher-db-sync" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.684745 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.690013 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-ft6vh" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.690277 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.690322 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.700171 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.723751 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.725106 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.729289 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780092 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgzb5\" (UniqueName: \"kubernetes.io/projected/639e3107-061d-4225-9344-7f2e0a3099b8-kube-api-access-xgzb5\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780285 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780405 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780448 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639e3107-061d-4225-9344-7f2e0a3099b8-logs\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780531 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-config-data\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780629 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-config-data\") pod \"watcher-api-0\" (UID: 
\"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780693 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2vlb\" (UniqueName: \"kubernetes.io/projected/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-kube-api-access-f2vlb\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780721 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780769 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-logs\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.780798 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.815845 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.836508 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.839004 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.849464 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.849460 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.888730 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgzb5\" (UniqueName: \"kubernetes.io/projected/639e3107-061d-4225-9344-7f2e0a3099b8-kube-api-access-xgzb5\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.889198 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.889395 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.889482 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639e3107-061d-4225-9344-7f2e0a3099b8-logs\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.889577 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-config-data\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.889670 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-config-data\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.889763 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2vlb\" (UniqueName: \"kubernetes.io/projected/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-kube-api-access-f2vlb\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.889841 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.889952 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-logs\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.890029 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.893102 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-logs\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.895922 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639e3107-061d-4225-9344-7f2e0a3099b8-logs\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.896533 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.896637 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-config-data\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.896829 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.897144 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-config-data\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.901166 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.907615 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgzb5\" (UniqueName: \"kubernetes.io/projected/639e3107-061d-4225-9344-7f2e0a3099b8-kube-api-access-xgzb5\") pod \"watcher-decision-engine-0\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.911763 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-f2vlb\" (UniqueName: \"kubernetes.io/projected/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-kube-api-access-f2vlb\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.912459 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " pod="openstack/watcher-api-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.992318 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.992765 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82262\" (UniqueName: \"kubernetes.io/projected/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-kube-api-access-82262\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.992807 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-logs\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:25 crc kubenswrapper[4823]: I0121 17:36:25.992846 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-config-data\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.020292 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.059223 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.095517 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.096838 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82262\" (UniqueName: \"kubernetes.io/projected/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-kube-api-access-82262\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.096997 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-logs\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.097450 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-config-data\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.097363 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-logs\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.109722 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-config-data\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.119760 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82262\" (UniqueName: \"kubernetes.io/projected/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-kube-api-access-82262\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.120561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624bfb5c-23b4-4da4-ba5a-15db0c47cf2e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e\") " pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.166645 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 21 17:36:26 crc kubenswrapper[4823]: I0121 17:36:26.928488 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:36:27 crc kubenswrapper[4823]: I0121 17:36:27.004364 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4tqlq"] Jan 21 17:36:27 crc kubenswrapper[4823]: I0121 17:36:27.004715 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="dnsmasq-dns" containerID="cri-o://00e4a785cada32a80be87f9e5206df3ac28a9ced6bc0db7f7c58110d900b102f" gracePeriod=10 Jan 21 17:36:27 crc kubenswrapper[4823]: I0121 17:36:27.139541 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Jan 21 17:36:27 crc kubenswrapper[4823]: I0121 17:36:27.617780 4823 generic.go:334] "Generic (PLEG): container finished" podID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerID="00e4a785cada32a80be87f9e5206df3ac28a9ced6bc0db7f7c58110d900b102f" exitCode=0 Jan 21 17:36:27 crc kubenswrapper[4823]: I0121 17:36:27.617829 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" event={"ID":"00ca24fe-07a5-4b20-a18d-5f62c510030a","Type":"ContainerDied","Data":"00e4a785cada32a80be87f9e5206df3ac28a9ced6bc0db7f7c58110d900b102f"} Jan 21 17:36:28 crc kubenswrapper[4823]: I0121 17:36:28.483831 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Jan 21 17:36:32 crc kubenswrapper[4823]: I0121 17:36:32.139158 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Jan 21 17:36:32 crc kubenswrapper[4823]: E0121 17:36:32.847874 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 17:36:32 crc kubenswrapper[4823]: E0121 17:36:32.848140 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n84h646hcchcdhfh666hc9h5fdh559h5f5h578h64bh646hc7h58dh85h67bh577h55dh649h684h667hd6hfbh67fh55bh5c5h5ch5b4h555h94h5cbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9tsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6455c87555-hpzsh_openstack(527e03e8-f94d-4d8a-b04f-e867350c6f32): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:36:32 crc kubenswrapper[4823]: E0121 17:36:32.854427 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6455c87555-hpzsh" podUID="527e03e8-f94d-4d8a-b04f-e867350c6f32" Jan 21 17:36:32 crc kubenswrapper[4823]: E0121 17:36:32.896725 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 17:36:32 crc kubenswrapper[4823]: E0121 17:36:32.896916 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n664h545h7h56dhd8h548hdh58ch578h575h684h5b8hb6h679hffh554hfdhd7hb5h599hffhb8hd5h688hdfh57dh8bh656h675hd5h77h66fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxgtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-59549fbb65-j6m8v_openstack(41a34226-6809-48a6-be65-12d1c025ef32): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:36:32 crc kubenswrapper[4823]: E0121 17:36:32.900445 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-59549fbb65-j6m8v" podUID="41a34226-6809-48a6-be65-12d1c025ef32" Jan 21 17:36:33 crc kubenswrapper[4823]: I0121 17:36:33.483262 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Jan 21 17:36:37 crc kubenswrapper[4823]: I0121 17:36:37.139196 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Jan 21 17:36:37 crc kubenswrapper[4823]: I0121 17:36:37.139756 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:36:37 crc kubenswrapper[4823]: I0121 17:36:37.736001 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4dtxc" 
event={"ID":"1af0664c-c80c-402b-bdd7-7e7fd1e5711e","Type":"ContainerDied","Data":"b7fc184dd324461367e40300fa9867eb716f86d2dcbb05353ef14f4885d4ea66"} Jan 21 17:36:37 crc kubenswrapper[4823]: I0121 17:36:37.736043 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7fc184dd324461367e40300fa9867eb716f86d2dcbb05353ef14f4885d4ea66" Jan 21 17:36:37 crc kubenswrapper[4823]: E0121 17:36:37.736643 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 17:36:37 crc kubenswrapper[4823]: E0121 17:36:37.737443 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc8h568h579hfch594h7fh5f8h554h645h7h5f6hcbh59bh5c8h54dh546h8fh564h598h689h685hdbhb8h545h8h656h55chf4h5dh84h9ch65q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjrv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-969658cd5-27ggm_openstack(6cd14b37-c130-44e0-b0f3-3508ea1b4e54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:36:37 crc kubenswrapper[4823]: E0121 17:36:37.744799 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-969658cd5-27ggm" podUID="6cd14b37-c130-44e0-b0f3-3508ea1b4e54" Jan 21 17:36:37 crc kubenswrapper[4823]: I0121 17:36:37.857009 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.025557 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncpf2\" (UniqueName: \"kubernetes.io/projected/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-kube-api-access-ncpf2\") pod \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.025619 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-config-data\") pod \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.025660 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-fernet-keys\") pod \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.025683 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-combined-ca-bundle\") pod \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.025791 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-scripts\") pod \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.025913 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-credential-keys\") pod \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\" (UID: \"1af0664c-c80c-402b-bdd7-7e7fd1e5711e\") " Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.034036 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1af0664c-c80c-402b-bdd7-7e7fd1e5711e" (UID: "1af0664c-c80c-402b-bdd7-7e7fd1e5711e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.044251 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-scripts" (OuterVolumeSpecName: "scripts") pod "1af0664c-c80c-402b-bdd7-7e7fd1e5711e" (UID: "1af0664c-c80c-402b-bdd7-7e7fd1e5711e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.044530 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-kube-api-access-ncpf2" (OuterVolumeSpecName: "kube-api-access-ncpf2") pod "1af0664c-c80c-402b-bdd7-7e7fd1e5711e" (UID: "1af0664c-c80c-402b-bdd7-7e7fd1e5711e"). InnerVolumeSpecName "kube-api-access-ncpf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.079340 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1af0664c-c80c-402b-bdd7-7e7fd1e5711e" (UID: "1af0664c-c80c-402b-bdd7-7e7fd1e5711e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.079424 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1af0664c-c80c-402b-bdd7-7e7fd1e5711e" (UID: "1af0664c-c80c-402b-bdd7-7e7fd1e5711e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.079559 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-config-data" (OuterVolumeSpecName: "config-data") pod "1af0664c-c80c-402b-bdd7-7e7fd1e5711e" (UID: "1af0664c-c80c-402b-bdd7-7e7fd1e5711e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.129113 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncpf2\" (UniqueName: \"kubernetes.io/projected/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-kube-api-access-ncpf2\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.129153 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.129165 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.129176 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.129187 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.129197 4823 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af0664c-c80c-402b-bdd7-7e7fd1e5711e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.419827 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66bddd7dd6-67b6t"] Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.746257 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4dtxc" Jan 21 17:36:38 crc kubenswrapper[4823]: I0121 17:36:38.986896 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4dtxc"] Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.005608 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4dtxc"] Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.067056 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-j5shb"] Jan 21 17:36:39 crc kubenswrapper[4823]: E0121 17:36:39.067734 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af0664c-c80c-402b-bdd7-7e7fd1e5711e" containerName="keystone-bootstrap" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.067758 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af0664c-c80c-402b-bdd7-7e7fd1e5711e" containerName="keystone-bootstrap" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.068024 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af0664c-c80c-402b-bdd7-7e7fd1e5711e" containerName="keystone-bootstrap" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.069033 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.077438 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.077585 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.077881 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jlj99" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.077966 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j5shb"] Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.078100 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.078101 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.262520 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-scripts\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.263401 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-config-data\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.263593 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-credential-keys\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.263782 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvl9\" (UniqueName: \"kubernetes.io/projected/7b8b53cb-9154-460c-90ff-72c9987cd31c-kube-api-access-drvl9\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.263938 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-fernet-keys\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.264054 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-combined-ca-bundle\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.367316 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvl9\" (UniqueName: \"kubernetes.io/projected/7b8b53cb-9154-460c-90ff-72c9987cd31c-kube-api-access-drvl9\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.372773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-fernet-keys\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.372978 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-combined-ca-bundle\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.373202 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-scripts\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.373294 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-config-data\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.373512 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-credential-keys\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.377319 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"keystone-config-data" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.377374 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.377659 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.381545 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af0664c-c80c-402b-bdd7-7e7fd1e5711e" path="/var/lib/kubelet/pods/1af0664c-c80c-402b-bdd7-7e7fd1e5711e/volumes" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.385828 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-combined-ca-bundle\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.389996 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-credential-keys\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.390048 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-scripts\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.390677 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-fernet-keys\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.402969 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-config-data\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.403319 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvl9\" (UniqueName: \"kubernetes.io/projected/7b8b53cb-9154-460c-90ff-72c9987cd31c-kube-api-access-drvl9\") pod \"keystone-bootstrap-j5shb\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.416925 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jlj99" Jan 21 17:36:39 crc kubenswrapper[4823]: I0121 17:36:39.424320 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:36:41 crc kubenswrapper[4823]: I0121 17:36:41.483604 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 17:36:41 crc kubenswrapper[4823]: I0121 17:36:41.484101 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:45 crc kubenswrapper[4823]: I0121 17:36:45.587289 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 17:36:45 crc kubenswrapper[4823]: I0121 17:36:45.587817 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 17:36:46 crc kubenswrapper[4823]: I0121 17:36:46.189075 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:46 crc kubenswrapper[4823]: I0121 17:36:46.189185 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:46 crc kubenswrapper[4823]: I0121 17:36:46.482708 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.140473 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.276693 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.282576 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.288897 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.304151 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.367654 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-httpd-run\") pod \"b01d19d3-e6c2-4083-82bb-4927e4325302\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.367736 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-scripts\") pod \"31345bdc-736a-4c87-b36d-ebb7e0162789\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.367770 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/527e03e8-f94d-4d8a-b04f-e867350c6f32-horizon-secret-key\") pod \"527e03e8-f94d-4d8a-b04f-e867350c6f32\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.367798 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-logs\") pod \"31345bdc-736a-4c87-b36d-ebb7e0162789\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.367888 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-config-data\") pod \"31345bdc-736a-4c87-b36d-ebb7e0162789\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.367924 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rznws\" (UniqueName: \"kubernetes.io/projected/31345bdc-736a-4c87-b36d-ebb7e0162789-kube-api-access-rznws\") pod \"31345bdc-736a-4c87-b36d-ebb7e0162789\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.367947 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-config-data\") pod \"41a34226-6809-48a6-be65-12d1c025ef32\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.367990 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-combined-ca-bundle\") pod \"31345bdc-736a-4c87-b36d-ebb7e0162789\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368019 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"31345bdc-736a-4c87-b36d-ebb7e0162789\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368038 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a34226-6809-48a6-be65-12d1c025ef32-logs\") pod \"41a34226-6809-48a6-be65-12d1c025ef32\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " Jan 21 17:36:47 crc 
kubenswrapper[4823]: I0121 17:36:47.368057 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfbnf\" (UniqueName: \"kubernetes.io/projected/b01d19d3-e6c2-4083-82bb-4927e4325302-kube-api-access-vfbnf\") pod \"b01d19d3-e6c2-4083-82bb-4927e4325302\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368125 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"b01d19d3-e6c2-4083-82bb-4927e4325302\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368160 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-config-data\") pod \"527e03e8-f94d-4d8a-b04f-e867350c6f32\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368190 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527e03e8-f94d-4d8a-b04f-e867350c6f32-logs\") pod \"527e03e8-f94d-4d8a-b04f-e867350c6f32\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368221 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9tsg\" (UniqueName: \"kubernetes.io/projected/527e03e8-f94d-4d8a-b04f-e867350c6f32-kube-api-access-w9tsg\") pod \"527e03e8-f94d-4d8a-b04f-e867350c6f32\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368267 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-combined-ca-bundle\") pod \"b01d19d3-e6c2-4083-82bb-4927e4325302\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368324 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-scripts\") pod \"527e03e8-f94d-4d8a-b04f-e867350c6f32\" (UID: \"527e03e8-f94d-4d8a-b04f-e867350c6f32\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368368 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-scripts\") pod \"41a34226-6809-48a6-be65-12d1c025ef32\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368405 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgtg\" (UniqueName: \"kubernetes.io/projected/41a34226-6809-48a6-be65-12d1c025ef32-kube-api-access-fxgtg\") pod \"41a34226-6809-48a6-be65-12d1c025ef32\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368429 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-logs\") pod \"b01d19d3-e6c2-4083-82bb-4927e4325302\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368459 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-config-data\") pod \"b01d19d3-e6c2-4083-82bb-4927e4325302\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368502 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41a34226-6809-48a6-be65-12d1c025ef32-horizon-secret-key\") pod \"41a34226-6809-48a6-be65-12d1c025ef32\" (UID: \"41a34226-6809-48a6-be65-12d1c025ef32\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368533 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-scripts\") pod \"b01d19d3-e6c2-4083-82bb-4927e4325302\" (UID: \"b01d19d3-e6c2-4083-82bb-4927e4325302\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368565 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-httpd-run\") pod \"31345bdc-736a-4c87-b36d-ebb7e0162789\" (UID: \"31345bdc-736a-4c87-b36d-ebb7e0162789\") " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.368784 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a34226-6809-48a6-be65-12d1c025ef32-logs" (OuterVolumeSpecName: "logs") pod "41a34226-6809-48a6-be65-12d1c025ef32" (UID: "41a34226-6809-48a6-be65-12d1c025ef32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.369308 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b01d19d3-e6c2-4083-82bb-4927e4325302" (UID: "b01d19d3-e6c2-4083-82bb-4927e4325302"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.369321 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a34226-6809-48a6-be65-12d1c025ef32-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.369809 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-config-data" (OuterVolumeSpecName: "config-data") pod "41a34226-6809-48a6-be65-12d1c025ef32" (UID: "41a34226-6809-48a6-be65-12d1c025ef32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.370251 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-scripts" (OuterVolumeSpecName: "scripts") pod "41a34226-6809-48a6-be65-12d1c025ef32" (UID: "41a34226-6809-48a6-be65-12d1c025ef32"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.370916 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-scripts" (OuterVolumeSpecName: "scripts") pod "527e03e8-f94d-4d8a-b04f-e867350c6f32" (UID: "527e03e8-f94d-4d8a-b04f-e867350c6f32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.371064 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-config-data" (OuterVolumeSpecName: "config-data") pod "527e03e8-f94d-4d8a-b04f-e867350c6f32" (UID: "527e03e8-f94d-4d8a-b04f-e867350c6f32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.371245 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527e03e8-f94d-4d8a-b04f-e867350c6f32-logs" (OuterVolumeSpecName: "logs") pod "527e03e8-f94d-4d8a-b04f-e867350c6f32" (UID: "527e03e8-f94d-4d8a-b04f-e867350c6f32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.376226 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527e03e8-f94d-4d8a-b04f-e867350c6f32-kube-api-access-w9tsg" (OuterVolumeSpecName: "kube-api-access-w9tsg") pod "527e03e8-f94d-4d8a-b04f-e867350c6f32" (UID: "527e03e8-f94d-4d8a-b04f-e867350c6f32"). InnerVolumeSpecName "kube-api-access-w9tsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.382788 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01d19d3-e6c2-4083-82bb-4927e4325302-kube-api-access-vfbnf" (OuterVolumeSpecName: "kube-api-access-vfbnf") pod "b01d19d3-e6c2-4083-82bb-4927e4325302" (UID: "b01d19d3-e6c2-4083-82bb-4927e4325302"). InnerVolumeSpecName "kube-api-access-vfbnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.383774 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31345bdc-736a-4c87-b36d-ebb7e0162789-kube-api-access-rznws" (OuterVolumeSpecName: "kube-api-access-rznws") pod "31345bdc-736a-4c87-b36d-ebb7e0162789" (UID: "31345bdc-736a-4c87-b36d-ebb7e0162789"). InnerVolumeSpecName "kube-api-access-rznws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.384675 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-logs" (OuterVolumeSpecName: "logs") pod "b01d19d3-e6c2-4083-82bb-4927e4325302" (UID: "b01d19d3-e6c2-4083-82bb-4927e4325302"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.385970 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-logs" (OuterVolumeSpecName: "logs") pod "31345bdc-736a-4c87-b36d-ebb7e0162789" (UID: "31345bdc-736a-4c87-b36d-ebb7e0162789"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.386744 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "31345bdc-736a-4c87-b36d-ebb7e0162789" (UID: "31345bdc-736a-4c87-b36d-ebb7e0162789"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.389460 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-scripts" (OuterVolumeSpecName: "scripts") pod "31345bdc-736a-4c87-b36d-ebb7e0162789" (UID: "31345bdc-736a-4c87-b36d-ebb7e0162789"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.389806 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527e03e8-f94d-4d8a-b04f-e867350c6f32-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "527e03e8-f94d-4d8a-b04f-e867350c6f32" (UID: "527e03e8-f94d-4d8a-b04f-e867350c6f32"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.390310 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "b01d19d3-e6c2-4083-82bb-4927e4325302" (UID: "b01d19d3-e6c2-4083-82bb-4927e4325302"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.392449 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "31345bdc-736a-4c87-b36d-ebb7e0162789" (UID: "31345bdc-736a-4c87-b36d-ebb7e0162789"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.398389 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a34226-6809-48a6-be65-12d1c025ef32-kube-api-access-fxgtg" (OuterVolumeSpecName: "kube-api-access-fxgtg") pod "41a34226-6809-48a6-be65-12d1c025ef32" (UID: "41a34226-6809-48a6-be65-12d1c025ef32"). InnerVolumeSpecName "kube-api-access-fxgtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.403321 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a34226-6809-48a6-be65-12d1c025ef32-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "41a34226-6809-48a6-be65-12d1c025ef32" (UID: "41a34226-6809-48a6-be65-12d1c025ef32"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.404455 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-scripts" (OuterVolumeSpecName: "scripts") pod "b01d19d3-e6c2-4083-82bb-4927e4325302" (UID: "b01d19d3-e6c2-4083-82bb-4927e4325302"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.422306 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31345bdc-736a-4c87-b36d-ebb7e0162789" (UID: "31345bdc-736a-4c87-b36d-ebb7e0162789"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.423305 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b01d19d3-e6c2-4083-82bb-4927e4325302" (UID: "b01d19d3-e6c2-4083-82bb-4927e4325302"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.441182 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-config-data" (OuterVolumeSpecName: "config-data") pod "31345bdc-736a-4c87-b36d-ebb7e0162789" (UID: "31345bdc-736a-4c87-b36d-ebb7e0162789"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.456711 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-config-data" (OuterVolumeSpecName: "config-data") pod "b01d19d3-e6c2-4083-82bb-4927e4325302" (UID: "b01d19d3-e6c2-4083-82bb-4927e4325302"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.470801 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471001 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471016 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527e03e8-f94d-4d8a-b04f-e867350c6f32-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471030 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9tsg\" (UniqueName: \"kubernetes.io/projected/527e03e8-f94d-4d8a-b04f-e867350c6f32-kube-api-access-w9tsg\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471045 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471057 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/527e03e8-f94d-4d8a-b04f-e867350c6f32-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471067 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471080 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgtg\" (UniqueName: \"kubernetes.io/projected/41a34226-6809-48a6-be65-12d1c025ef32-kube-api-access-fxgtg\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471090 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471100 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471112 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41a34226-6809-48a6-be65-12d1c025ef32-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471123 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b01d19d3-e6c2-4083-82bb-4927e4325302-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471133 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471144 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b01d19d3-e6c2-4083-82bb-4927e4325302-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471156 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471167 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/527e03e8-f94d-4d8a-b04f-e867350c6f32-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471177 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31345bdc-736a-4c87-b36d-ebb7e0162789-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471189 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471201 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41a34226-6809-48a6-be65-12d1c025ef32-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471214 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rznws\" (UniqueName: \"kubernetes.io/projected/31345bdc-736a-4c87-b36d-ebb7e0162789-kube-api-access-rznws\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: 
I0121 17:36:47.471226 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31345bdc-736a-4c87-b36d-ebb7e0162789-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471254 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.471269 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfbnf\" (UniqueName: \"kubernetes.io/projected/b01d19d3-e6c2-4083-82bb-4927e4325302-kube-api-access-vfbnf\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.489736 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.491434 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.573724 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.573763 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:47 crc kubenswrapper[4823]: E0121 17:36:47.681966 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 21 17:36:47 crc kubenswrapper[4823]: E0121 17:36:47.682185 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf6h578h64h587h57bhb8h5f5h55ch5b7h7fh54dh695h586h55fh67ch76hddh5c4h594h5ch689h5fh5fdh695hdh6fh57chb6h5b9h64fh586h584q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsjgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6697a997-d4df-46c4-8520-8d23c6203f87): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.865579 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59549fbb65-j6m8v" event={"ID":"41a34226-6809-48a6-be65-12d1c025ef32","Type":"ContainerDied","Data":"255aa2c04ca3f67d118d94a8b79f25deef33ed237a506c9bf408efbee1add51b"} Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.865621 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59549fbb65-j6m8v" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.867375 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6455c87555-hpzsh" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.867401 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6455c87555-hpzsh" event={"ID":"527e03e8-f94d-4d8a-b04f-e867350c6f32","Type":"ContainerDied","Data":"fe676c8f2a6933b4daaa10f85a44b3bc0b3d38c2c6a14afbc879c7887f3e45fd"}
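The ceilometer-central-agent failure above dumps the raw &Container{...} struct, which is hard to scan, but its exec liveness probe is fully specified inside it. The same probe re-expressed as k8s.io/api/core/v1 source, with every value taken verbatim from the dump:

```go
// The exec liveness probe from the &Container{...} dump above,
// re-expressed as k8s.io/api/core/v1 source. All values are verbatim
// from the dump; nothing here is new configuration.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/usr/bin/python3", "/var/lib/openstack/bin/centralhealth.py"},
			},
		},
		InitialDelaySeconds: 300, // no liveness checks for the first 5 minutes
		TimeoutSeconds:      5,
		PeriodSeconds:       5,
		SuccessThreshold:    1,
		FailureThreshold:    3, // ~15s of consecutive failures triggers a restart
	}
	fmt.Printf("%+v\n", probe)
}
```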
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.869697 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b01d19d3-e6c2-4083-82bb-4927e4325302","Type":"ContainerDied","Data":"2f154fdb9ab7d7b47b4b52026469657f8af74153f94481b1aa2edc7e0ed5673a"} Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.869916 4823 scope.go:117] "RemoveContainer" containerID="47ea7e6cce0ea3ab8371b174abd53673cac810c2e4f7c64bf6ac4939e43d7dab" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.878164 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31345bdc-736a-4c87-b36d-ebb7e0162789","Type":"ContainerDied","Data":"14e95015d4f20953e4529882a5ab1396de360871a6b76696aa539decdaa3c5a9"} Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.878230 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.956061 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6455c87555-hpzsh"] Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.974024 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6455c87555-hpzsh"] Jan 21 17:36:47 crc kubenswrapper[4823]: I0121 17:36:47.996191 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.014487 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.025469 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.044050 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.082405 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:36:48 crc kubenswrapper[4823]: E0121 17:36:48.082773 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31345bdc-736a-4c87-b36d-ebb7e0162789" containerName="glance-log" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.082790 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="31345bdc-736a-4c87-b36d-ebb7e0162789" containerName="glance-log" Jan 21 17:36:48 crc kubenswrapper[4823]: E0121 17:36:48.082808 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerName="glance-log" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.082815 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerName="glance-log" Jan 21 17:36:48 crc kubenswrapper[4823]: E0121 17:36:48.082846 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerName="glance-httpd" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.082866 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerName="glance-httpd" Jan 21 17:36:48 crc kubenswrapper[4823]: E0121 17:36:48.082876 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31345bdc-736a-4c87-b36d-ebb7e0162789" 
containerName="glance-httpd" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.082883 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="31345bdc-736a-4c87-b36d-ebb7e0162789" containerName="glance-httpd" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.083049 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerName="glance-httpd" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.083076 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01d19d3-e6c2-4083-82bb-4927e4325302" containerName="glance-log" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.083088 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="31345bdc-736a-4c87-b36d-ebb7e0162789" containerName="glance-httpd" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.083114 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="31345bdc-736a-4c87-b36d-ebb7e0162789" containerName="glance-log" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.084138 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.091962 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.093079 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x59xs" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.093161 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.093170 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.117117 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.120581 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.123523 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.123747 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.129276 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.156922 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.182175 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59549fbb65-j6m8v"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.194241 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59549fbb65-j6m8v"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.290727 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6fc7\" (UniqueName: \"kubernetes.io/projected/21eb6cab-7de3-4826-9d12-33a1b7e13a13-kube-api-access-b6fc7\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.290795 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.290870 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-scripts\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291070 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291138 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-logs\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291332 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291413 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291449 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-logs\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291599 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-config-data\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291633 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291680 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmcpp\" (UniqueName: \"kubernetes.io/projected/9f3f6704-aa00-4387-9410-564e0cf95d93-kube-api-access-kmcpp\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291748 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291791 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.291873 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.292041 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.393749 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.393806 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.393824 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.393847 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-logs\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.393972 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-config-data\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.393993 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.394015 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmcpp\" (UniqueName: \"kubernetes.io/projected/9f3f6704-aa00-4387-9410-564e0cf95d93-kube-api-access-kmcpp\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.394041 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc 
kubenswrapper[4823]: I0121 17:36:48.394063 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.394089 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.394125 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.394146 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.394650 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-logs\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.395000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.395446 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.396231 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6fc7\" (UniqueName: \"kubernetes.io/projected/21eb6cab-7de3-4826-9d12-33a1b7e13a13-kube-api-access-b6fc7\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.396311 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.396427 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-scripts\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.396474 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.396513 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-logs\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.397185 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.397200 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-logs\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.402269 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-scripts\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.402829 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.402975 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.403114 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-config-data\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.405561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.405819 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.417818 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.419911 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmcpp\" (UniqueName: \"kubernetes.io/projected/9f3f6704-aa00-4387-9410-564e0cf95d93-kube-api-access-kmcpp\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.421259 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6fc7\" (UniqueName: \"kubernetes.io/projected/21eb6cab-7de3-4826-9d12-33a1b7e13a13-kube-api-access-b6fc7\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.423767 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.447799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.451033 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: E0121 17:36:48.545729 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 21 17:36:48 crc kubenswrapper[4823]: E0121 17:36:48.546023 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdqqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-vh8wn_openstack(c29754ad-e324-474f-a0df-d450b9152aa3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:36:48 crc kubenswrapper[4823]: E0121 17:36:48.548193 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-vh8wn" podUID="c29754ad-e324-474f-a0df-d450b9152aa3" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.700901 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.714237 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.718841 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.734754 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.747380 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.810641 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.810730 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjp5w\" (UniqueName: \"kubernetes.io/projected/00ca24fe-07a5-4b20-a18d-5f62c510030a-kube-api-access-bjp5w\") pod \"00ca24fe-07a5-4b20-a18d-5f62c510030a\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.810803 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-web-config\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.810876 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-1\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.810911 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-config\") pod \"00ca24fe-07a5-4b20-a18d-5f62c510030a\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.810945 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qshlk\" (UniqueName: \"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-kube-api-access-qshlk\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.811017 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-nb\") pod \"00ca24fe-07a5-4b20-a18d-5f62c510030a\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.811052 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-0\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.812709 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.812730 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.813035 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-thanos-prometheus-http-client-file\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.813151 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-2\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.813188 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config-out\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.813208 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-tls-assets\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.813238 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-dns-svc\") pod \"00ca24fe-07a5-4b20-a18d-5f62c510030a\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.813315 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config\") pod \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\" (UID: \"9c7540d6-15ae-4931-b89b-2c9c0429b86a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.813347 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-sb\") pod \"00ca24fe-07a5-4b20-a18d-5f62c510030a\" (UID: \"00ca24fe-07a5-4b20-a18d-5f62c510030a\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.814106 4823 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.814129 4823 reconciler_common.go:293] "Volume detached for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.815056 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.820227 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-kube-api-access-qshlk" (OuterVolumeSpecName: "kube-api-access-qshlk") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "kube-api-access-qshlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.829458 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config-out" (OuterVolumeSpecName: "config-out") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.829994 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ca24fe-07a5-4b20-a18d-5f62c510030a-kube-api-access-bjp5w" (OuterVolumeSpecName: "kube-api-access-bjp5w") pod "00ca24fe-07a5-4b20-a18d-5f62c510030a" (UID: "00ca24fe-07a5-4b20-a18d-5f62c510030a"). InnerVolumeSpecName "kube-api-access-bjp5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.837956 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config" (OuterVolumeSpecName: "config") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.838236 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.839411 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.846417 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.859142 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-web-config" (OuterVolumeSpecName: "web-config") pod "9c7540d6-15ae-4931-b89b-2c9c0429b86a" (UID: "9c7540d6-15ae-4931-b89b-2c9c0429b86a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.870129 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00ca24fe-07a5-4b20-a18d-5f62c510030a" (UID: "00ca24fe-07a5-4b20-a18d-5f62c510030a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.877465 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00ca24fe-07a5-4b20-a18d-5f62c510030a" (UID: "00ca24fe-07a5-4b20-a18d-5f62c510030a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.888429 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-config" (OuterVolumeSpecName: "config") pod "00ca24fe-07a5-4b20-a18d-5f62c510030a" (UID: "00ca24fe-07a5-4b20-a18d-5f62c510030a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.893594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" event={"ID":"00ca24fe-07a5-4b20-a18d-5f62c510030a","Type":"ContainerDied","Data":"9158637442d98d07bfddce95b8690e5fc9d3479435c146aec1b9fc70aaf6ef27"} Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.893651 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.895090 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bddd7dd6-67b6t" event={"ID":"709299a1-f499-447b-a738-efe1b32c7abf","Type":"ContainerStarted","Data":"33f1d196b665591348a8c5a89533df6e54e0394f90488a033e3449502cdd5f35"} Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.899272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c7540d6-15ae-4931-b89b-2c9c0429b86a","Type":"ContainerDied","Data":"36f62ad5e0ae2c734160fd87c21988ff446e8c242a03e27ee000ed2bf19521af"} Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.899319 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.902643 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-969658cd5-27ggm" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.902777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-969658cd5-27ggm" event={"ID":"6cd14b37-c130-44e0-b0f3-3508ea1b4e54","Type":"ContainerDied","Data":"8957438aec79b760904701aef49b7415b3f6b2266f11184a419eebd9cd67dbb6"} Jan 21 17:36:48 crc kubenswrapper[4823]: E0121 17:36:48.903798 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-vh8wn" podUID="c29754ad-e324-474f-a0df-d450b9152aa3" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.904206 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00ca24fe-07a5-4b20-a18d-5f62c510030a" (UID: "00ca24fe-07a5-4b20-a18d-5f62c510030a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.915469 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjrv6\" (UniqueName: \"kubernetes.io/projected/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-kube-api-access-fjrv6\") pod \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.915583 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-logs\") pod \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.915620 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-horizon-secret-key\") pod \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.915789 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-config-data\") pod \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.915820 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-scripts\") pod \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\" (UID: \"6cd14b37-c130-44e0-b0f3-3508ea1b4e54\") " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.915994 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-logs" (OuterVolumeSpecName: "logs") pod "6cd14b37-c130-44e0-b0f3-3508ea1b4e54" (UID: "6cd14b37-c130-44e0-b0f3-3508ea1b4e54"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916391 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916412 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qshlk\" (UniqueName: \"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-kube-api-access-qshlk\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916424 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916439 4823 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916451 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916465 4823 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9c7540d6-15ae-4931-b89b-2c9c0429b86a-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916477 4823 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config-out\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916488 4823 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c7540d6-15ae-4931-b89b-2c9c0429b86a-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916499 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916510 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916520 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00ca24fe-07a5-4b20-a18d-5f62c510030a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916550 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") on node \"crc\" " Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916564 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjp5w\" (UniqueName: 
\"kubernetes.io/projected/00ca24fe-07a5-4b20-a18d-5f62c510030a-kube-api-access-bjp5w\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916576 4823 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c7540d6-15ae-4931-b89b-2c9c0429b86a-web-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.916727 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-config-data" (OuterVolumeSpecName: "config-data") pod "6cd14b37-c130-44e0-b0f3-3508ea1b4e54" (UID: "6cd14b37-c130-44e0-b0f3-3508ea1b4e54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.917297 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-scripts" (OuterVolumeSpecName: "scripts") pod "6cd14b37-c130-44e0-b0f3-3508ea1b4e54" (UID: "6cd14b37-c130-44e0-b0f3-3508ea1b4e54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.925751 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6cd14b37-c130-44e0-b0f3-3508ea1b4e54" (UID: "6cd14b37-c130-44e0-b0f3-3508ea1b4e54"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.944879 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-kube-api-access-fjrv6" (OuterVolumeSpecName: "kube-api-access-fjrv6") pod "6cd14b37-c130-44e0-b0f3-3508ea1b4e54" (UID: "6cd14b37-c130-44e0-b0f3-3508ea1b4e54"). InnerVolumeSpecName "kube-api-access-fjrv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.949891 4823 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.950022 4823 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158") on node "crc" Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.966033 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 17:36:48 crc kubenswrapper[4823]: I0121 17:36:48.999125 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.013940 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 17:36:49 crc kubenswrapper[4823]: E0121 17:36:49.014463 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="thanos-sidecar" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.014483 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="thanos-sidecar" Jan 21 17:36:49 crc kubenswrapper[4823]: E0121 17:36:49.014498 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="init-config-reloader" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.014509 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="init-config-reloader" Jan 21 17:36:49 crc kubenswrapper[4823]: E0121 17:36:49.014546 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="init" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.014555 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="init" Jan 21 17:36:49 crc kubenswrapper[4823]: E0121 17:36:49.014568 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="config-reloader" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.014575 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="config-reloader" Jan 21 17:36:49 crc kubenswrapper[4823]: E0121 17:36:49.014609 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="prometheus" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.014615 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="prometheus" Jan 21 17:36:49 crc kubenswrapper[4823]: E0121 17:36:49.014624 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="dnsmasq-dns" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.014630 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="dnsmasq-dns" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.016926 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="thanos-sidecar" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.016999 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="prometheus" Jan 21 17:36:49 crc 
kubenswrapper[4823]: I0121 17:36:49.017051 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="config-reloader" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.017083 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="dnsmasq-dns" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.026725 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.026777 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.026794 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.026805 4823 reconciler_common.go:293] "Volume detached for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.026826 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjrv6\" (UniqueName: \"kubernetes.io/projected/6cd14b37-c130-44e0-b0f3-3508ea1b4e54-kube-api-access-fjrv6\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.065161 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.065450 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.078303 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.079140 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.079248 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.079255 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.079297 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.079327 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.079318 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.079446 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hndk8" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.083329 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64bdb4885b-ddprk"] Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.237287 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.237342 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.237440 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.237517 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" 
Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.237556 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.238180 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4tqlq"] Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.237623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.246063 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.246115 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.246345 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whsj\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-kube-api-access-5whsj\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.246385 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.246566 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.246698 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " 
pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.246795 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.250386 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4tqlq"] Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.279869 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-969658cd5-27ggm"] Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.288810 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-969658cd5-27ggm"] Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349266 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349346 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349385 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349431 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349456 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349503 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 
17:36:49.349521 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349606 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349623 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5whsj\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-kube-api-access-5whsj\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349673 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349703 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.349752 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.350139 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.350386 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" 
(UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.351036 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.354049 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.354626 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.354887 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.356340 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.356441 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3bf7d8e6071accb44cb216af941703c855ece813d1c3a48f9936e31f1ede18e7/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.357646 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.358348 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.359956 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.363798 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.363883 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.370971 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5whsj\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-kube-api-access-5whsj\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.377480 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" path="/var/lib/kubelet/pods/00ca24fe-07a5-4b20-a18d-5f62c510030a/volumes" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.379121 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31345bdc-736a-4c87-b36d-ebb7e0162789" 
path="/var/lib/kubelet/pods/31345bdc-736a-4c87-b36d-ebb7e0162789/volumes" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.380037 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a34226-6809-48a6-be65-12d1c025ef32" path="/var/lib/kubelet/pods/41a34226-6809-48a6-be65-12d1c025ef32/volumes" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.381380 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527e03e8-f94d-4d8a-b04f-e867350c6f32" path="/var/lib/kubelet/pods/527e03e8-f94d-4d8a-b04f-e867350c6f32/volumes" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.382223 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd14b37-c130-44e0-b0f3-3508ea1b4e54" path="/var/lib/kubelet/pods/6cd14b37-c130-44e0-b0f3-3508ea1b4e54/volumes" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.383185 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" path="/var/lib/kubelet/pods/9c7540d6-15ae-4931-b89b-2c9c0429b86a/volumes" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.385095 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01d19d3-e6c2-4083-82bb-4927e4325302" path="/var/lib/kubelet/pods/b01d19d3-e6c2-4083-82bb-4927e4325302/volumes" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.400132 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.698391 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.911166 4823 generic.go:334] "Generic (PLEG): container finished" podID="4b459c03-94c5-43e1-bda8-b7e174f3830c" containerID="96ad8c386975f7c37f9f67d9d8d2cb614ed8ee0e1c8b48791d39efc14e843b92" exitCode=0 Jan 21 17:36:49 crc kubenswrapper[4823]: I0121 17:36:49.911227 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6l2l8" event={"ID":"4b459c03-94c5-43e1-bda8-b7e174f3830c","Type":"ContainerDied","Data":"96ad8c386975f7c37f9f67d9d8d2cb614ed8ee0e1c8b48791d39efc14e843b92"} Jan 21 17:36:50 crc kubenswrapper[4823]: I0121 17:36:50.709547 4823 scope.go:117] "RemoveContainer" containerID="2bf3c7e21b24bc95d05dc7bc3e739fd60d4d32c1835625a19bee46ec020fc582" Jan 21 17:36:50 crc kubenswrapper[4823]: I0121 17:36:50.799015 4823 scope.go:117] "RemoveContainer" containerID="0adf20c8b4cf5f4e67a21a6d6b6d96e356f39461463592cb78b63269d2ea40e4" Jan 21 17:36:50 crc kubenswrapper[4823]: I0121 17:36:50.918455 4823 scope.go:117] "RemoveContainer" containerID="2b99d49c7f921eb81d72298c94a5b90ae8e22c642161d1a1961f0b2eac476828" Jan 21 17:36:50 crc kubenswrapper[4823]: E0121 17:36:50.920151 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 21 17:36:50 crc kubenswrapper[4823]: E0121 17:36:50.920449 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gkfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOption
s:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-fr4qp_openstack(2d118987-76ea-46aa-9989-274e87e36d3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:36:50 crc kubenswrapper[4823]: E0121 17:36:50.922309 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-fr4qp" podUID="2d118987-76ea-46aa-9989-274e87e36d3a" Jan 21 17:36:50 crc kubenswrapper[4823]: I0121 17:36:50.965057 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64bdb4885b-ddprk" event={"ID":"7890c9eb-67a6-4c41-af5b-c57f0fddc533","Type":"ContainerStarted","Data":"9cc57cb183057b6b3f399509fb7b9eba9b5df1c0157b1c131a8441959ec60e59"} Jan 21 17:36:50 crc kubenswrapper[4823]: I0121 17:36:50.974335 4823 scope.go:117] "RemoveContainer" containerID="00e4a785cada32a80be87f9e5206df3ac28a9ced6bc0db7f7c58110d900b102f" Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.042579 4823 scope.go:117] "RemoveContainer" containerID="1c512bc6b034e7740f216daf16223dd5d15e25b5efd7381b139594befcae59d4" Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.130739 4823 scope.go:117] "RemoveContainer" containerID="e3c7bce20716bbc4d63196a0b4b22eea62cf5e1508f90b88acb048019b2705b9" Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.156083 4823 scope.go:117] "RemoveContainer" containerID="0b9c1ca3c3f00c2fea0933ecc1a234742c5be85950363e795331c7103ea267f4" Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.184458 4823 scope.go:117] "RemoveContainer" containerID="583a0e2faa3deb9f5f136fb8e74f7df8738130af6fe90f46666e39764b49e975" Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.223768 4823 scope.go:117] "RemoveContainer" containerID="65c57cc2faa2053e3831057252afd0bd6a79afe4474e02a62ed8e48d75eeb9bb" Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.258101 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 17:36:51 crc kubenswrapper[4823]: W0121 17:36:51.259131 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod639e3107_061d_4225_9344_7f2e0a3099b8.slice/crio-4c6006200738b4996e2ea020c68ac2549f8ac578d9f0900a278db113e5de9a22 WatchSource:0}: Error finding container 4c6006200738b4996e2ea020c68ac2549f8ac578d9f0900a278db113e5de9a22: Status 404 returned error can't find the container with id 4c6006200738b4996e2ea020c68ac2549f8ac578d9f0900a278db113e5de9a22 Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.425302 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.435726 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.482916 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9c7540d6-15ae-4931-b89b-2c9c0429b86a" containerName="prometheus" 
probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.578547 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j5shb"] Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.655516 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.730252 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.780126 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:36:51 crc kubenswrapper[4823]: I0121 17:36:51.793786 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.029293 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qnhv5" event={"ID":"a7924f2b-6db5-4473-ae49-91c0d32fa817","Type":"ContainerStarted","Data":"7450e92d8570cca61abdb5f2fdf552a81b8e93450b302712d25c883f417f62db"} Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.032568 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d6cbd234-b542-49e1-bd22-bb4b307b2f7f","Type":"ContainerStarted","Data":"2d286be18c8e8022776e3850af671a3373d1731b393233d3a4d5bb5a580a2772"} Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.035744 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"639e3107-061d-4225-9344-7f2e0a3099b8","Type":"ContainerStarted","Data":"4c6006200738b4996e2ea020c68ac2549f8ac578d9f0900a278db113e5de9a22"} Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.044408 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.050424 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21eb6cab-7de3-4826-9d12-33a1b7e13a13","Type":"ContainerStarted","Data":"8668939d7b2fba38c241250746c4460836a5452a094892dd07ca43e3cf4ced36"} Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.052457 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qnhv5" podStartSLOduration=3.564251922 podStartE2EDuration="43.052399876s" podCreationTimestamp="2026-01-21 17:36:09 +0000 UTC" firstStartedPulling="2026-01-21 17:36:11.134667606 +0000 UTC m=+1172.060798466" lastFinishedPulling="2026-01-21 17:36:50.62281556 +0000 UTC m=+1211.548946420" observedRunningTime="2026-01-21 17:36:52.050900479 +0000 UTC m=+1212.977031339" watchObservedRunningTime="2026-01-21 17:36:52.052399876 +0000 UTC m=+1212.978530736" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.053265 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j5shb" event={"ID":"7b8b53cb-9154-460c-90ff-72c9987cd31c","Type":"ContainerStarted","Data":"cd6edb13a1119b95e3ae578541d8f2d2aafcf8a266fc6a7431aa83700bc8e985"} Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.119693 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9f3f6704-aa00-4387-9410-564e0cf95d93","Type":"ContainerStarted","Data":"dbecf7833200736ba035da758d1666031a6e8e36ec2f701076e5fd830d698d9a"} Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.128496 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerStarted","Data":"b9245ea8bfe003013eda0999cf69d1573d774686b84b0499b077f1ef0c0f609f"} Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.131726 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e","Type":"ContainerStarted","Data":"8e3c57910e05c4646de206aee8da91b4c6d7bbae426c289350b6d858e868b54c"} Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.135012 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6l2l8" event={"ID":"4b459c03-94c5-43e1-bda8-b7e174f3830c","Type":"ContainerDied","Data":"6cfc0fce31f4741f77fcc1c8aa5a8de2f521c90e0047ea355d3f61e5dbf91bf3"} Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.135046 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cfc0fce31f4741f77fcc1c8aa5a8de2f521c90e0047ea355d3f61e5dbf91bf3" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.135103 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6l2l8" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.141275 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-4tqlq" podUID="00ca24fe-07a5-4b20-a18d-5f62c510030a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Jan 21 17:36:52 crc kubenswrapper[4823]: E0121 17:36:52.164009 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-fr4qp" podUID="2d118987-76ea-46aa-9989-274e87e36d3a" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.227567 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-combined-ca-bundle\") pod \"4b459c03-94c5-43e1-bda8-b7e174f3830c\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.227822 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-config\") pod \"4b459c03-94c5-43e1-bda8-b7e174f3830c\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.227947 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsfbb\" (UniqueName: \"kubernetes.io/projected/4b459c03-94c5-43e1-bda8-b7e174f3830c-kube-api-access-wsfbb\") pod \"4b459c03-94c5-43e1-bda8-b7e174f3830c\" (UID: \"4b459c03-94c5-43e1-bda8-b7e174f3830c\") " Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.238435 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b459c03-94c5-43e1-bda8-b7e174f3830c-kube-api-access-wsfbb" (OuterVolumeSpecName: "kube-api-access-wsfbb") pod "4b459c03-94c5-43e1-bda8-b7e174f3830c" (UID: "4b459c03-94c5-43e1-bda8-b7e174f3830c"). InnerVolumeSpecName "kube-api-access-wsfbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.272958 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b459c03-94c5-43e1-bda8-b7e174f3830c" (UID: "4b459c03-94c5-43e1-bda8-b7e174f3830c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.285416 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-config" (OuterVolumeSpecName: "config") pod "4b459c03-94c5-43e1-bda8-b7e174f3830c" (UID: "4b459c03-94c5-43e1-bda8-b7e174f3830c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.331342 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.332037 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsfbb\" (UniqueName: \"kubernetes.io/projected/4b459c03-94c5-43e1-bda8-b7e174f3830c-kube-api-access-wsfbb\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:52 crc kubenswrapper[4823]: I0121 17:36:52.332068 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b459c03-94c5-43e1-bda8-b7e174f3830c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.176598 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bddd7dd6-67b6t" event={"ID":"709299a1-f499-447b-a738-efe1b32c7abf","Type":"ContainerStarted","Data":"44d5a2f2da90ef2703441199fc8ce5e3f4fdd42c9730a24f75ee5a11121068b6"} Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.178197 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21eb6cab-7de3-4826-9d12-33a1b7e13a13","Type":"ContainerStarted","Data":"a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf"} Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.179492 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j5shb" event={"ID":"7b8b53cb-9154-460c-90ff-72c9987cd31c","Type":"ContainerStarted","Data":"bd1f7a4d9d083eadccae0b7d71781fcdfd8b0069dcfee4ccf299729784a40c95"} Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.181168 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9f3f6704-aa00-4387-9410-564e0cf95d93","Type":"ContainerStarted","Data":"7b680b277188a67080df784b710b84edfecab7659e5fd84706112f335574955f"} Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.182416 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d6cbd234-b542-49e1-bd22-bb4b307b2f7f","Type":"ContainerStarted","Data":"4441f1faddda2fddccf7bb37ae303b7492b3fbef94e8f0ce2a1106ac3f8f4433"} Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.207529 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-j5shb" podStartSLOduration=14.207507451 podStartE2EDuration="14.207507451s" podCreationTimestamp="2026-01-21 17:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:53.200366435 +0000 UTC m=+1214.126497295" watchObservedRunningTime="2026-01-21 17:36:53.207507451 +0000 UTC m=+1214.133638311" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.414624 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r9rjd"] Jan 21 17:36:53 crc kubenswrapper[4823]: E0121 17:36:53.415666 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b459c03-94c5-43e1-bda8-b7e174f3830c" containerName="neutron-db-sync" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.415689 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b459c03-94c5-43e1-bda8-b7e174f3830c" containerName="neutron-db-sync" Jan 21 17:36:53 
crc kubenswrapper[4823]: I0121 17:36:53.415922 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b459c03-94c5-43e1-bda8-b7e174f3830c" containerName="neutron-db-sync" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.417241 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.430194 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r9rjd"] Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.490616 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.490713 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpfkg\" (UniqueName: \"kubernetes.io/projected/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-kube-api-access-kpfkg\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.490757 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-config\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.490840 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-svc\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.490905 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.490932 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.570242 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54965ff674-mlg85"] Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.572816 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.576137 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5h4v2" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.576534 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.576987 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.577083 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.591094 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54965ff674-mlg85"] Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594364 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpfkg\" (UniqueName: \"kubernetes.io/projected/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-kube-api-access-kpfkg\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594433 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrgjd\" (UniqueName: \"kubernetes.io/projected/42280059-4e27-4de8-ace4-aeb184783f74-kube-api-access-lrgjd\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594468 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-config\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594500 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-httpd-config\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594565 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-config\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594596 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-svc\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594624 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: 
\"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594644 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-combined-ca-bundle\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594706 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-ovndb-tls-certs\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.594788 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.595658 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-config\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.595885 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-svc\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.596395 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.598751 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.599214 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " 
pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.626274 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpfkg\" (UniqueName: \"kubernetes.io/projected/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-kube-api-access-kpfkg\") pod \"dnsmasq-dns-6b7b667979-r9rjd\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.696360 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrgjd\" (UniqueName: \"kubernetes.io/projected/42280059-4e27-4de8-ace4-aeb184783f74-kube-api-access-lrgjd\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.696412 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-httpd-config\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.696445 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-config\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.696510 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-combined-ca-bundle\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.696563 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-ovndb-tls-certs\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.702244 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-ovndb-tls-certs\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.702448 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-config\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.702542 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-combined-ca-bundle\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.715328 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-httpd-config\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.720445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrgjd\" (UniqueName: \"kubernetes.io/projected/42280059-4e27-4de8-ace4-aeb184783f74-kube-api-access-lrgjd\") pod \"neutron-54965ff674-mlg85\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") " pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.764986 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:36:53 crc kubenswrapper[4823]: I0121 17:36:53.905587 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.888699 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-959c75cfc-2zd2j"] Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.891223 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.894282 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.894282 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.898060 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-959c75cfc-2zd2j"] Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.948497 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-public-tls-certs\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.948889 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-httpd-config\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.949098 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fppn\" (UniqueName: \"kubernetes.io/projected/250ad919-e0b5-4ae5-8f77-7631bb71aba0-kube-api-access-6fppn\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.949257 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-config\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.949361 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-combined-ca-bundle\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.949730 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-ovndb-tls-certs\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:55 crc kubenswrapper[4823]: I0121 17:36:55.949893 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-internal-tls-certs\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.052424 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-config\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.052473 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-combined-ca-bundle\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.052521 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-ovndb-tls-certs\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.052544 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-internal-tls-certs\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.052611 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-public-tls-certs\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.052627 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-httpd-config\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.052672 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fppn\" (UniqueName: 
\"kubernetes.io/projected/250ad919-e0b5-4ae5-8f77-7631bb71aba0-kube-api-access-6fppn\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.059549 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-httpd-config\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.059783 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-config\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.060491 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-combined-ca-bundle\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.075512 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-ovndb-tls-certs\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.076292 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-internal-tls-certs\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.083428 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fppn\" (UniqueName: \"kubernetes.io/projected/250ad919-e0b5-4ae5-8f77-7631bb71aba0-kube-api-access-6fppn\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.094037 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ad919-e0b5-4ae5-8f77-7631bb71aba0-public-tls-certs\") pod \"neutron-959c75cfc-2zd2j\" (UID: \"250ad919-e0b5-4ae5-8f77-7631bb71aba0\") " pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.214046 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:36:56 crc kubenswrapper[4823]: I0121 17:36:56.227174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerStarted","Data":"8fea9c777a706a8c76799da43129933f5e7c503735e994e92b38106673cd1d99"} Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 17:36:57.096934 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-959c75cfc-2zd2j"] Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 17:36:57.286544 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d6cbd234-b542-49e1-bd22-bb4b307b2f7f","Type":"ContainerStarted","Data":"a0e97968cdb435dc48bc4654e6d326cf446ba68f8236748e2e495dc689d4075f"} Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 17:36:57.288223 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 17:36:57.325773 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=32.325746448 podStartE2EDuration="32.325746448s" podCreationTimestamp="2026-01-21 17:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:57.311648299 +0000 UTC m=+1218.237779159" watchObservedRunningTime="2026-01-21 17:36:57.325746448 +0000 UTC m=+1218.251877318" Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 17:36:57.340026 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-959c75cfc-2zd2j" event={"ID":"250ad919-e0b5-4ae5-8f77-7631bb71aba0","Type":"ContainerStarted","Data":"3d1d5c6b51a74894d1af03698a05d48c77d1bdda1c4a1d524087bd93ddc774b6"} Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 17:36:57.432910 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66bddd7dd6-67b6t" podStartSLOduration=33.121950491 podStartE2EDuration="36.432840243s" podCreationTimestamp="2026-01-21 17:36:21 +0000 UTC" firstStartedPulling="2026-01-21 17:36:48.550777852 +0000 UTC m=+1209.476908712" lastFinishedPulling="2026-01-21 17:36:51.861667614 +0000 UTC m=+1212.787798464" observedRunningTime="2026-01-21 17:36:57.422109908 +0000 UTC m=+1218.348240968" watchObservedRunningTime="2026-01-21 17:36:57.432840243 +0000 UTC m=+1218.358971103" Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 17:36:57.450391 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bddd7dd6-67b6t" event={"ID":"709299a1-f499-447b-a738-efe1b32c7abf","Type":"ContainerStarted","Data":"3abba0939e32ab96372505bdcbaf313c824aac470cfbacb2e6298a426bf143da"} Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 17:36:57.513219 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54965ff674-mlg85"] Jan 21 17:36:57 crc kubenswrapper[4823]: W0121 17:36:57.544321 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42280059_4e27_4de8_ace4_aeb184783f74.slice/crio-d8606d0d935e6d2068274388930e8a5ffa3f63aa78638970719043dc7b31b08b WatchSource:0}: Error finding container d8606d0d935e6d2068274388930e8a5ffa3f63aa78638970719043dc7b31b08b: Status 404 returned error can't find the container with id d8606d0d935e6d2068274388930e8a5ffa3f63aa78638970719043dc7b31b08b Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 
Jan 21 17:36:57 crc kubenswrapper[4823]: I0121 17:36:57.558748 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r9rjd"]
Jan 21 17:36:57 crc kubenswrapper[4823]: E0121 17:36:57.581521 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b8b53cb_9154_460c_90ff_72c9987cd31c.slice/crio-bd1f7a4d9d083eadccae0b7d71781fcdfd8b0069dcfee4ccf299729784a40c95.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b8b53cb_9154_460c_90ff_72c9987cd31c.slice/crio-conmon-bd1f7a4d9d083eadccae0b7d71781fcdfd8b0069dcfee4ccf299729784a40c95.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.457753 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64bdb4885b-ddprk" event={"ID":"7890c9eb-67a6-4c41-af5b-c57f0fddc533","Type":"ContainerStarted","Data":"4e46d9e6739ac4d11e1c17a1a18eb7088ea089e47013703887983e60a3eea237"}
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.458496 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64bdb4885b-ddprk" event={"ID":"7890c9eb-67a6-4c41-af5b-c57f0fddc533","Type":"ContainerStarted","Data":"9bdbd8dbcff4be15445e291de88aae0aaa2bf29fecc871dc52d140cbd8e40c78"}
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.463820 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21eb6cab-7de3-4826-9d12-33a1b7e13a13","Type":"ContainerStarted","Data":"1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4"}
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.465223 4823 generic.go:334] "Generic (PLEG): container finished" podID="7b8b53cb-9154-460c-90ff-72c9987cd31c" containerID="bd1f7a4d9d083eadccae0b7d71781fcdfd8b0069dcfee4ccf299729784a40c95" exitCode=0
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.465294 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j5shb" event={"ID":"7b8b53cb-9154-460c-90ff-72c9987cd31c","Type":"ContainerDied","Data":"bd1f7a4d9d083eadccae0b7d71781fcdfd8b0069dcfee4ccf299729784a40c95"}
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.468176 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6697a997-d4df-46c4-8520-8d23c6203f87","Type":"ContainerStarted","Data":"b4eef1af5fcaa8c3d62477f21ed00c840d61028222a3c6a05f4b2bb375bdd081"}
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.473033 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"639e3107-061d-4225-9344-7f2e0a3099b8","Type":"ContainerStarted","Data":"03aef47c3e5fb27c19f3f3276813a1825d29fa376ce9aaa7338ff130f2a69e61"}
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.514519 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"624bfb5c-23b4-4da4-ba5a-15db0c47cf2e","Type":"ContainerStarted","Data":"3f1c5ad8cf4009c4eaad6cfa27fb492789be285a18dc5a55baab41b7a4b56ac2"}
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.527281 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=28.007704335 podStartE2EDuration="33.527248249s" podCreationTimestamp="2026-01-21 17:36:25 +0000 UTC" firstStartedPulling="2026-01-21 17:36:51.279523293 +0000
UTC m=+1212.205654153" lastFinishedPulling="2026-01-21 17:36:56.799067207 +0000 UTC m=+1217.725198067" observedRunningTime="2026-01-21 17:36:58.526743076 +0000 UTC m=+1219.452873936" watchObservedRunningTime="2026-01-21 17:36:58.527248249 +0000 UTC m=+1219.453379109" Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.536399 4823 generic.go:334] "Generic (PLEG): container finished" podID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" containerID="f0be92cd34acaeceb2cfb696eff7aac413cb4a067ef7a42bc6b06e3eda6c3c2d" exitCode=0 Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.536539 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" event={"ID":"92e4d2b6-046c-42ff-afcc-2dda2abe61cc","Type":"ContainerDied","Data":"f0be92cd34acaeceb2cfb696eff7aac413cb4a067ef7a42bc6b06e3eda6c3c2d"} Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.536577 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" event={"ID":"92e4d2b6-046c-42ff-afcc-2dda2abe61cc","Type":"ContainerStarted","Data":"a59c6d894b444e69356596b55a8953e4b218df33b219ba1f7fe69d2e1874c280"} Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.581514 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=28.577009319 podStartE2EDuration="33.581485729s" podCreationTimestamp="2026-01-21 17:36:25 +0000 UTC" firstStartedPulling="2026-01-21 17:36:51.475005942 +0000 UTC m=+1212.401136802" lastFinishedPulling="2026-01-21 17:36:56.479482352 +0000 UTC m=+1217.405613212" observedRunningTime="2026-01-21 17:36:58.573396349 +0000 UTC m=+1219.499527209" watchObservedRunningTime="2026-01-21 17:36:58.581485729 +0000 UTC m=+1219.507616589" Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.582831 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-959c75cfc-2zd2j" event={"ID":"250ad919-e0b5-4ae5-8f77-7631bb71aba0","Type":"ContainerStarted","Data":"bc402bbdd8f4867590b0a8b50a815c83324668a02e6609bdd6f90641366cd671"} Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.590795 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54965ff674-mlg85" event={"ID":"42280059-4e27-4de8-ace4-aeb184783f74","Type":"ContainerStarted","Data":"3bf82de43ab7e722fb03c673fca248493493b44727320a86384bc5782d7bf19e"} Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.590879 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54965ff674-mlg85" event={"ID":"42280059-4e27-4de8-ace4-aeb184783f74","Type":"ContainerStarted","Data":"d8606d0d935e6d2068274388930e8a5ffa3f63aa78638970719043dc7b31b08b"} Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.594205 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9f3f6704-aa00-4387-9410-564e0cf95d93","Type":"ContainerStarted","Data":"60f007dd250764f75914cb2570b765d2f1f81f90b83ba35bdbec8ec14db35769"} Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.662824 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.662792727 podStartE2EDuration="11.662792727s" podCreationTimestamp="2026-01-21 17:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:58.632232102 +0000 UTC m=+1219.558362962" watchObservedRunningTime="2026-01-21 
17:36:58.662792727 +0000 UTC m=+1219.588923587"
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.748891 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.749272 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.799896 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 17:36:58 crc kubenswrapper[4823]: I0121 17:36:58.889589 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 17:36:59 crc kubenswrapper[4823]: I0121 17:36:59.603670 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 21 17:36:59 crc kubenswrapper[4823]: I0121 17:36:59.604242 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 21 17:36:59 crc kubenswrapper[4823]: I0121 17:36:59.634392 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.634369419 podStartE2EDuration="12.634369419s" podCreationTimestamp="2026-01-21 17:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:36:59.631779125 +0000 UTC m=+1220.557909985" watchObservedRunningTime="2026-01-21 17:36:59.634369419 +0000 UTC m=+1220.560500279"
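The probe entries above trace a container's lifecycle gates: the startup probe flips from unhealthy to started, after which readiness probing takes over (an empty status means a result has not settled to ready yet). Because these "SyncLoop (probe)" records share one rigid format, the transitions can be extracted from a journal dump mechanically. A minimal stdin scanner in Go, assuming exactly the quoting shown in this log (the regexp is an assumption about the format, not a kubelet API):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Scan kubelet journal lines on stdin for SyncLoop probe results and print
    // each pod's probe-status transitions, e.g.
    //   openstack/glance-default-external-api-0 startup: "unhealthy" -> "started"
    var probeRE = regexp.MustCompile(
        `"SyncLoop \(probe\)" probe="(\w+)" status="(\w*)" pod="([^"]+)"`)

    func main() {
        last := map[string]string{} // "pod/probe" -> last seen status
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            m := probeRE.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            probe, status, pod := m[1], m[2], m[3]
            key := pod + "/" + probe
            if prev, ok := last[key]; ok && prev != status {
                fmt.Printf("%s %s: %q -> %q\n", pod, probe, prev, status)
            }
            last[key] = status
        }
    }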
Need to start a new one" pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.296997 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-fernet-keys\") pod \"7b8b53cb-9154-460c-90ff-72c9987cd31c\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.297054 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-combined-ca-bundle\") pod \"7b8b53cb-9154-460c-90ff-72c9987cd31c\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.297089 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drvl9\" (UniqueName: \"kubernetes.io/projected/7b8b53cb-9154-460c-90ff-72c9987cd31c-kube-api-access-drvl9\") pod \"7b8b53cb-9154-460c-90ff-72c9987cd31c\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.297120 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-credential-keys\") pod \"7b8b53cb-9154-460c-90ff-72c9987cd31c\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.297214 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-scripts\") pod \"7b8b53cb-9154-460c-90ff-72c9987cd31c\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.297314 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-config-data\") pod \"7b8b53cb-9154-460c-90ff-72c9987cd31c\" (UID: \"7b8b53cb-9154-460c-90ff-72c9987cd31c\") " Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.308252 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-scripts" (OuterVolumeSpecName: "scripts") pod "7b8b53cb-9154-460c-90ff-72c9987cd31c" (UID: "7b8b53cb-9154-460c-90ff-72c9987cd31c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.310688 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7b8b53cb-9154-460c-90ff-72c9987cd31c" (UID: "7b8b53cb-9154-460c-90ff-72c9987cd31c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.311348 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8b53cb-9154-460c-90ff-72c9987cd31c-kube-api-access-drvl9" (OuterVolumeSpecName: "kube-api-access-drvl9") pod "7b8b53cb-9154-460c-90ff-72c9987cd31c" (UID: "7b8b53cb-9154-460c-90ff-72c9987cd31c"). InnerVolumeSpecName "kube-api-access-drvl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.328512 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7b8b53cb-9154-460c-90ff-72c9987cd31c" (UID: "7b8b53cb-9154-460c-90ff-72c9987cd31c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.356625 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-config-data" (OuterVolumeSpecName: "config-data") pod "7b8b53cb-9154-460c-90ff-72c9987cd31c" (UID: "7b8b53cb-9154-460c-90ff-72c9987cd31c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.364906 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b8b53cb-9154-460c-90ff-72c9987cd31c" (UID: "7b8b53cb-9154-460c-90ff-72c9987cd31c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.399212 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.399249 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.399259 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.399273 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drvl9\" (UniqueName: \"kubernetes.io/projected/7b8b53cb-9154-460c-90ff-72c9987cd31c-kube-api-access-drvl9\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.399283 4823 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.399292 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8b53cb-9154-460c-90ff-72c9987cd31c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.613778 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j5shb" event={"ID":"7b8b53cb-9154-460c-90ff-72c9987cd31c","Type":"ContainerDied","Data":"cd6edb13a1119b95e3ae578541d8f2d2aafcf8a266fc6a7431aa83700bc8e985"} Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.613838 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd6edb13a1119b95e3ae578541d8f2d2aafcf8a266fc6a7431aa83700bc8e985" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.613890 4823 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j5shb" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.619101 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" event={"ID":"92e4d2b6-046c-42ff-afcc-2dda2abe61cc","Type":"ContainerStarted","Data":"c69e30eb1de8ad194b9af2d7b29831da6ac5c222aa3bf25d58243d2015cd14d0"} Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.619180 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.623523 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-959c75cfc-2zd2j" event={"ID":"250ad919-e0b5-4ae5-8f77-7631bb71aba0","Type":"ContainerStarted","Data":"e3af15ca236a6972ffd353650cfc04d599eb00559ca850b24ca8149cf044b466"} Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.624528 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.628982 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54965ff674-mlg85" event={"ID":"42280059-4e27-4de8-ace4-aeb184783f74","Type":"ContainerStarted","Data":"6947b4c028998adce00bf3650a53a2d4cd2d9ac0e92047b3123fddbe9fb01e4e"} Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.629267 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.658194 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" podStartSLOduration=7.65814191 podStartE2EDuration="7.65814191s" podCreationTimestamp="2026-01-21 17:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:37:00.648647926 +0000 UTC m=+1221.574778786" watchObservedRunningTime="2026-01-21 17:37:00.65814191 +0000 UTC m=+1221.584272770" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.728817 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-64bdb4885b-ddprk" podStartSLOduration=33.964579987 podStartE2EDuration="39.728789965s" podCreationTimestamp="2026-01-21 17:36:21 +0000 UTC" firstStartedPulling="2026-01-21 17:36:50.62282622 +0000 UTC m=+1211.548957080" lastFinishedPulling="2026-01-21 17:36:56.387036198 +0000 UTC m=+1217.313167058" observedRunningTime="2026-01-21 17:37:00.679566669 +0000 UTC m=+1221.605697529" watchObservedRunningTime="2026-01-21 17:37:00.728789965 +0000 UTC m=+1221.654920825" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.735226 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6dc7996bf8-rhk6q"] Jan 21 17:37:00 crc kubenswrapper[4823]: E0121 17:37:00.735728 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8b53cb-9154-460c-90ff-72c9987cd31c" containerName="keystone-bootstrap" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.735748 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8b53cb-9154-460c-90ff-72c9987cd31c" containerName="keystone-bootstrap" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.736012 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8b53cb-9154-460c-90ff-72c9987cd31c" containerName="keystone-bootstrap" Jan 21 17:37:00 crc 
kubenswrapper[4823]: I0121 17:37:00.736802 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.739690 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.740397 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.740522 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.740665 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jlj99" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.743284 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.752452 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.766009 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6dc7996bf8-rhk6q"] Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.775510 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-959c75cfc-2zd2j" podStartSLOduration=5.775480369 podStartE2EDuration="5.775480369s" podCreationTimestamp="2026-01-21 17:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:37:00.729473722 +0000 UTC m=+1221.655604582" watchObservedRunningTime="2026-01-21 17:37:00.775480369 +0000 UTC m=+1221.701611229" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.792001 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54965ff674-mlg85" podStartSLOduration=7.791970276 podStartE2EDuration="7.791970276s" podCreationTimestamp="2026-01-21 17:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:37:00.771956072 +0000 UTC m=+1221.698086932" watchObservedRunningTime="2026-01-21 17:37:00.791970276 +0000 UTC m=+1221.718101136" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.813572 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-public-tls-certs\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.813662 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-config-data\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.813693 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-fernet-keys\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: 
\"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.813733 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-credential-keys\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.813758 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-scripts\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.813818 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-internal-tls-certs\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.814256 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s26l\" (UniqueName: \"kubernetes.io/projected/296d5316-1483-48f5-98f9-3b0ca03c4268-kube-api-access-6s26l\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.814291 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-combined-ca-bundle\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.916565 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-internal-tls-certs\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.918042 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s26l\" (UniqueName: \"kubernetes.io/projected/296d5316-1483-48f5-98f9-3b0ca03c4268-kube-api-access-6s26l\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.918479 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-combined-ca-bundle\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.918549 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-public-tls-certs\") pod 
\"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.918687 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-config-data\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.918740 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-fernet-keys\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.918822 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-credential-keys\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.918907 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-scripts\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.936053 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-internal-tls-certs\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.952526 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-fernet-keys\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.955264 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s26l\" (UniqueName: \"kubernetes.io/projected/296d5316-1483-48f5-98f9-3b0ca03c4268-kube-api-access-6s26l\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.957135 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-scripts\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.968564 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-public-tls-certs\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.969788 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-config-data\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:00 crc kubenswrapper[4823]: I0121 17:37:00.971674 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-combined-ca-bundle\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:01 crc kubenswrapper[4823]: I0121 17:37:01.021262 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 17:37:01 crc kubenswrapper[4823]: I0121 17:37:01.021800 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 17:37:01 crc kubenswrapper[4823]: I0121 17:37:01.028324 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/296d5316-1483-48f5-98f9-3b0ca03c4268-credential-keys\") pod \"keystone-6dc7996bf8-rhk6q\" (UID: \"296d5316-1483-48f5-98f9-3b0ca03c4268\") " pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:01 crc kubenswrapper[4823]: I0121 17:37:01.137256 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:01 crc kubenswrapper[4823]: I0121 17:37:01.166993 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 21 17:37:01 crc kubenswrapper[4823]: I0121 17:37:01.641727 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 17:37:01 crc kubenswrapper[4823]: I0121 17:37:01.776409 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6dc7996bf8-rhk6q"] Jan 21 17:37:01 crc kubenswrapper[4823]: I0121 17:37:01.863835 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 17:37:02 crc kubenswrapper[4823]: I0121 17:37:02.251700 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:37:02 crc kubenswrapper[4823]: I0121 17:37:02.251781 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:37:02 crc kubenswrapper[4823]: I0121 17:37:02.309253 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:37:02 crc kubenswrapper[4823]: I0121 17:37:02.310499 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:37:02 crc kubenswrapper[4823]: I0121 17:37:02.670807 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6dc7996bf8-rhk6q" event={"ID":"296d5316-1483-48f5-98f9-3b0ca03c4268","Type":"ContainerStarted","Data":"17a8d3c45de900315953034dac54049f795f7e0bc1b7a7159090e9a549f0ee43"} Jan 21 17:37:04 crc kubenswrapper[4823]: I0121 17:37:04.181906 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 17:37:04 crc kubenswrapper[4823]: I0121 17:37:04.234458 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Jan 21 17:37:05 crc kubenswrapper[4823]: I0121 17:37:05.705305 4823 generic.go:334] "Generic (PLEG): container finished" podID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerID="8fea9c777a706a8c76799da43129933f5e7c503735e994e92b38106673cd1d99" exitCode=0 Jan 21 17:37:05 crc kubenswrapper[4823]: I0121 17:37:05.705488 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerDied","Data":"8fea9c777a706a8c76799da43129933f5e7c503735e994e92b38106673cd1d99"} Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.021920 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.033356 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.060220 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.088648 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.167810 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.202255 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.717162 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.727067 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.754073 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 17:37:06 crc kubenswrapper[4823]: I0121 17:37:06.766433 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 21 17:37:08 crc kubenswrapper[4823]: I0121 17:37:08.719768 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 17:37:08 crc kubenswrapper[4823]: I0121 17:37:08.720552 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 17:37:08 crc kubenswrapper[4823]: I0121 17:37:08.767241 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:37:08 crc kubenswrapper[4823]: I0121 17:37:08.775581 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 17:37:08 crc kubenswrapper[4823]: I0121 17:37:08.775989 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 17:37:08 crc kubenswrapper[4823]: I0121 17:37:08.783136 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 17:37:08 crc kubenswrapper[4823]: I0121 17:37:08.856628 4823 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-srv5l"] Jan 21 17:37:08 crc kubenswrapper[4823]: I0121 17:37:08.857038 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" containerName="dnsmasq-dns" containerID="cri-o://8b5252e3c9d84c533990fe68e839f9a7db254971e297bc194e33ce04c493e486" gracePeriod=10 Jan 21 17:37:09 crc kubenswrapper[4823]: I0121 17:37:09.729277 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 21 17:37:09 crc kubenswrapper[4823]: I0121 17:37:09.730148 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api-log" containerID="cri-o://4441f1faddda2fddccf7bb37ae303b7492b3fbef94e8f0ce2a1106ac3f8f4433" gracePeriod=30 Jan 21 17:37:09 crc kubenswrapper[4823]: I0121 17:37:09.731153 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api" containerID="cri-o://a0e97968cdb435dc48bc4654e6d326cf446ba68f8236748e2e495dc689d4075f" gracePeriod=30 Jan 21 17:37:09 crc kubenswrapper[4823]: I0121 17:37:09.754467 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 17:37:10 crc kubenswrapper[4823]: I0121 17:37:10.765304 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 17:37:11 crc kubenswrapper[4823]: I0121 17:37:11.696905 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 17:37:11 crc kubenswrapper[4823]: I0121 17:37:11.772574 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 17:37:11 crc kubenswrapper[4823]: I0121 17:37:11.927613 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Jan 21 17:37:12 crc kubenswrapper[4823]: I0121 17:37:12.092527 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 17:37:12 crc kubenswrapper[4823]: I0121 17:37:12.254524 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64bdb4885b-ddprk" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 21 17:37:12 crc kubenswrapper[4823]: I0121 17:37:12.311506 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66bddd7dd6-67b6t" podUID="709299a1-f499-447b-a738-efe1b32c7abf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 17:37:12 crc kubenswrapper[4823]: I0121 17:37:12.940322 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": read tcp 10.217.0.2:57874->10.217.0.161:9322: read: connection reset 
by peer" Jan 21 17:37:12 crc kubenswrapper[4823]: I0121 17:37:12.940314 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": read tcp 10.217.0.2:57868->10.217.0.161:9322: read: connection reset by peer" Jan 21 17:37:14 crc kubenswrapper[4823]: I0121 17:37:14.804093 4823 generic.go:334] "Generic (PLEG): container finished" podID="f3254a75-e9bf-4947-b472-c4d824599c49" containerID="8b5252e3c9d84c533990fe68e839f9a7db254971e297bc194e33ce04c493e486" exitCode=0 Jan 21 17:37:14 crc kubenswrapper[4823]: I0121 17:37:14.804643 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" event={"ID":"f3254a75-e9bf-4947-b472-c4d824599c49","Type":"ContainerDied","Data":"8b5252e3c9d84c533990fe68e839f9a7db254971e297bc194e33ce04c493e486"} Jan 21 17:37:14 crc kubenswrapper[4823]: I0121 17:37:14.807345 4823 generic.go:334] "Generic (PLEG): container finished" podID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerID="4441f1faddda2fddccf7bb37ae303b7492b3fbef94e8f0ce2a1106ac3f8f4433" exitCode=143 Jan 21 17:37:14 crc kubenswrapper[4823]: I0121 17:37:14.807379 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d6cbd234-b542-49e1-bd22-bb4b307b2f7f","Type":"ContainerDied","Data":"4441f1faddda2fddccf7bb37ae303b7492b3fbef94e8f0ce2a1106ac3f8f4433"} Jan 21 17:37:16 crc kubenswrapper[4823]: I0121 17:37:16.021966 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": dial tcp 10.217.0.161:9322: connect: connection refused" Jan 21 17:37:16 crc kubenswrapper[4823]: I0121 17:37:16.024189 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": dial tcp 10.217.0.161:9322: connect: connection refused" Jan 21 17:37:16 crc kubenswrapper[4823]: I0121 17:37:16.927255 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Jan 21 17:37:17 crc kubenswrapper[4823]: I0121 17:37:17.837908 4823 generic.go:334] "Generic (PLEG): container finished" podID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerID="a0e97968cdb435dc48bc4654e6d326cf446ba68f8236748e2e495dc689d4075f" exitCode=0 Jan 21 17:37:17 crc kubenswrapper[4823]: I0121 17:37:17.837977 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d6cbd234-b542-49e1-bd22-bb4b307b2f7f","Type":"ContainerDied","Data":"a0e97968cdb435dc48bc4654e6d326cf446ba68f8236748e2e495dc689d4075f"} Jan 21 17:37:18 crc kubenswrapper[4823]: I0121 17:37:18.853181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6dc7996bf8-rhk6q" event={"ID":"296d5316-1483-48f5-98f9-3b0ca03c4268","Type":"ContainerStarted","Data":"ca0d4496ca9dfb7480b8a22c831378ddacffbb3ce50116c98d283609b905007a"} Jan 21 17:37:18 crc kubenswrapper[4823]: I0121 17:37:18.853501 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:18 crc kubenswrapper[4823]: I0121 17:37:18.894325 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6dc7996bf8-rhk6q" podStartSLOduration=18.894293632 podStartE2EDuration="18.894293632s" podCreationTimestamp="2026-01-21 17:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:37:18.888449548 +0000 UTC m=+1239.814580438" watchObservedRunningTime="2026-01-21 17:37:18.894293632 +0000 UTC m=+1239.820424512" Jan 21 17:37:22 crc kubenswrapper[4823]: I0121 17:37:22.252658 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64bdb4885b-ddprk" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 21 17:37:22 crc kubenswrapper[4823]: I0121 17:37:22.310128 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66bddd7dd6-67b6t" podUID="709299a1-f499-447b-a738-efe1b32c7abf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 17:37:23 crc kubenswrapper[4823]: I0121 17:37:23.910445 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-54965ff674-mlg85" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 17:37:23 crc kubenswrapper[4823]: I0121 17:37:23.910486 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54965ff674-mlg85" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 17:37:23 crc kubenswrapper[4823]: I0121 17:37:23.911702 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-54965ff674-mlg85" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.717966 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.727725 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.912842 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" event={"ID":"f3254a75-e9bf-4947-b472-c4d824599c49","Type":"ContainerDied","Data":"febc47bda305dd147fa805d6e62372619915cbf0d0ef9378f49a93ad8807138c"} Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.912921 4823 scope.go:117] "RemoveContainer" containerID="8b5252e3c9d84c533990fe68e839f9a7db254971e297bc194e33ce04c493e486" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.913040 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914067 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-svc\") pod \"f3254a75-e9bf-4947-b472-c4d824599c49\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914113 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-nb\") pod \"f3254a75-e9bf-4947-b472-c4d824599c49\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914190 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jptlf\" (UniqueName: \"kubernetes.io/projected/f3254a75-e9bf-4947-b472-c4d824599c49-kube-api-access-jptlf\") pod \"f3254a75-e9bf-4947-b472-c4d824599c49\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914215 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-custom-prometheus-ca\") pod \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914256 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-combined-ca-bundle\") pod \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914276 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-logs\") pod \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914352 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-swift-storage-0\") pod \"f3254a75-e9bf-4947-b472-c4d824599c49\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914384 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-config\") pod \"f3254a75-e9bf-4947-b472-c4d824599c49\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914438 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-sb\") pod \"f3254a75-e9bf-4947-b472-c4d824599c49\" (UID: \"f3254a75-e9bf-4947-b472-c4d824599c49\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914499 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-config-data\") pod \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\" (UID: 
\"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.914528 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2vlb\" (UniqueName: \"kubernetes.io/projected/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-kube-api-access-f2vlb\") pod \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\" (UID: \"d6cbd234-b542-49e1-bd22-bb4b307b2f7f\") " Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.916228 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-logs" (OuterVolumeSpecName: "logs") pod "d6cbd234-b542-49e1-bd22-bb4b307b2f7f" (UID: "d6cbd234-b542-49e1-bd22-bb4b307b2f7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.922464 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d6cbd234-b542-49e1-bd22-bb4b307b2f7f","Type":"ContainerDied","Data":"2d286be18c8e8022776e3850af671a3373d1731b393233d3a4d5bb5a580a2772"} Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.922583 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.928968 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-kube-api-access-f2vlb" (OuterVolumeSpecName: "kube-api-access-f2vlb") pod "d6cbd234-b542-49e1-bd22-bb4b307b2f7f" (UID: "d6cbd234-b542-49e1-bd22-bb4b307b2f7f"). InnerVolumeSpecName "kube-api-access-f2vlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.951038 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3254a75-e9bf-4947-b472-c4d824599c49-kube-api-access-jptlf" (OuterVolumeSpecName: "kube-api-access-jptlf") pod "f3254a75-e9bf-4947-b472-c4d824599c49" (UID: "f3254a75-e9bf-4947-b472-c4d824599c49"). InnerVolumeSpecName "kube-api-access-jptlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.973372 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d6cbd234-b542-49e1-bd22-bb4b307b2f7f" (UID: "d6cbd234-b542-49e1-bd22-bb4b307b2f7f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.983461 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-config-data" (OuterVolumeSpecName: "config-data") pod "d6cbd234-b542-49e1-bd22-bb4b307b2f7f" (UID: "d6cbd234-b542-49e1-bd22-bb4b307b2f7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.989527 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3254a75-e9bf-4947-b472-c4d824599c49" (UID: "f3254a75-e9bf-4947-b472-c4d824599c49"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.990128 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3254a75-e9bf-4947-b472-c4d824599c49" (UID: "f3254a75-e9bf-4947-b472-c4d824599c49"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:37:24 crc kubenswrapper[4823]: I0121 17:37:24.996484 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-config" (OuterVolumeSpecName: "config") pod "f3254a75-e9bf-4947-b472-c4d824599c49" (UID: "f3254a75-e9bf-4947-b472-c4d824599c49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.005068 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6cbd234-b542-49e1-bd22-bb4b307b2f7f" (UID: "d6cbd234-b542-49e1-bd22-bb4b307b2f7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.008968 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3254a75-e9bf-4947-b472-c4d824599c49" (UID: "f3254a75-e9bf-4947-b472-c4d824599c49"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016517 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016572 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jptlf\" (UniqueName: \"kubernetes.io/projected/f3254a75-e9bf-4947-b472-c4d824599c49-kube-api-access-jptlf\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016588 4823 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016599 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016612 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016626 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016637 4823 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016648 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016659 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.016670 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2vlb\" (UniqueName: \"kubernetes.io/projected/d6cbd234-b542-49e1-bd22-bb4b307b2f7f-kube-api-access-f2vlb\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.021633 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3254a75-e9bf-4947-b472-c4d824599c49" (UID: "f3254a75-e9bf-4947-b472-c4d824599c49"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.122329 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3254a75-e9bf-4947-b472-c4d824599c49-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.130041 4823 scope.go:117] "RemoveContainer" containerID="94bc527df58640219a854f5d371c4bfcf1e412934dea61faf7569c7fad343893" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.163760 4823 scope.go:117] "RemoveContainer" containerID="a0e97968cdb435dc48bc4654e6d326cf446ba68f8236748e2e495dc689d4075f" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.199391 4823 scope.go:117] "RemoveContainer" containerID="4441f1faddda2fddccf7bb37ae303b7492b3fbef94e8f0ce2a1106ac3f8f4433" Jan 21 17:37:25 crc kubenswrapper[4823]: E0121 17:37:25.253656 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Jan 21 17:37:25 crc kubenswrapper[4823]: E0121 17:37:25.253817 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsjgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6697a997-d4df-46c4-8520-8d23c6203f87): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.259298 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-srv5l"] Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.272198 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-srv5l"] Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.285228 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.302492 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.316142 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 21 17:37:25 crc kubenswrapper[4823]: E0121 17:37:25.316622 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" containerName="dnsmasq-dns" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.316648 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" containerName="dnsmasq-dns" Jan 21 17:37:25 crc kubenswrapper[4823]: E0121 17:37:25.316670 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api-log" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.316680 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api-log" Jan 21 17:37:25 crc kubenswrapper[4823]: E0121 17:37:25.316709 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.316717 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api" Jan 21 17:37:25 crc kubenswrapper[4823]: E0121 17:37:25.316738 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" containerName="init" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.316746 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" containerName="init" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.316968 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" containerName="dnsmasq-dns" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.316998 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api-log" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.317023 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api" Jan 21 17:37:25 crc 
kubenswrapper[4823]: I0121 17:37:25.318278 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.322999 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.323118 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.323244 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.327371 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87b074e0-8609-4e85-a0df-ce3376e7b7df-logs\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.327420 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.327999 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-config-data\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.358135 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" path="/var/lib/kubelet/pods/d6cbd234-b542-49e1-bd22-bb4b307b2f7f/volumes" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.358717 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" path="/var/lib/kubelet/pods/f3254a75-e9bf-4947-b472-c4d824599c49/volumes" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.430592 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.430660 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.430698 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87b074e0-8609-4e85-a0df-ce3376e7b7df-logs\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.430757 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc 
kubenswrapper[4823]: I0121 17:37:25.430803 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg7rp\" (UniqueName: \"kubernetes.io/projected/87b074e0-8609-4e85-a0df-ce3376e7b7df-kube-api-access-zg7rp\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.430901 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-public-tls-certs\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.430934 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-config-data\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.431256 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87b074e0-8609-4e85-a0df-ce3376e7b7df-logs\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.435766 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-config-data\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.532316 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.532392 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.532449 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.532503 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg7rp\" (UniqueName: \"kubernetes.io/projected/87b074e0-8609-4e85-a0df-ce3376e7b7df-kube-api-access-zg7rp\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.532598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-public-tls-certs\") pod \"watcher-api-0\" (UID: 
\"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.536209 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-public-tls-certs\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.537143 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.537571 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.538227 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b074e0-8609-4e85-a0df-ce3376e7b7df-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.561173 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg7rp\" (UniqueName: \"kubernetes.io/projected/87b074e0-8609-4e85-a0df-ce3376e7b7df-kube-api-access-zg7rp\") pod \"watcher-api-0\" (UID: \"87b074e0-8609-4e85-a0df-ce3376e7b7df\") " pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.643354 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 21 17:37:25 crc kubenswrapper[4823]: I0121 17:37:25.933358 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vh8wn" event={"ID":"c29754ad-e324-474f-a0df-d450b9152aa3","Type":"ContainerStarted","Data":"0e25ba130262bf3981890d0157534915dbdd7f5f6d73bf4de6aa3f47178c655f"} Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.020626 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.020719 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d6cbd234-b542-49e1-bd22-bb4b307b2f7f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.223130 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-959c75cfc-2zd2j" podUID="250ad919-e0b5-4ae5-8f77-7631bb71aba0" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.223456 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-959c75cfc-2zd2j" podUID="250ad919-e0b5-4ae5-8f77-7631bb71aba0" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.223274 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-959c75cfc-2zd2j" podUID="250ad919-e0b5-4ae5-8f77-7631bb71aba0" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 17:37:26 crc kubenswrapper[4823]: W0121 17:37:26.441600 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b074e0_8609_4e85_a0df_ce3376e7b7df.slice/crio-fed0dff198e09cf4f0621c4cd05129247bd1fd398d2f975efbc968d7c3710428 WatchSource:0}: Error finding container fed0dff198e09cf4f0621c4cd05129247bd1fd398d2f975efbc968d7c3710428: Status 404 returned error can't find the container with id fed0dff198e09cf4f0621c4cd05129247bd1fd398d2f975efbc968d7c3710428 Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.443776 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.927009 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-srv5l" podUID="f3254a75-e9bf-4947-b472-c4d824599c49" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: i/o timeout" Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.947181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"87b074e0-8609-4e85-a0df-ce3376e7b7df","Type":"ContainerStarted","Data":"141b108be80940e58263bedb6a00c66038f4df1a9d8d081c679afcc439aea34b"} Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.947233 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"87b074e0-8609-4e85-a0df-ce3376e7b7df","Type":"ContainerStarted","Data":"fed0dff198e09cf4f0621c4cd05129247bd1fd398d2f975efbc968d7c3710428"} Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.951347 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerStarted","Data":"043e516cedffa753f7f731b672f310e54f123c06f9867079326d1b3e5e969586"} Jan 21 17:37:26 crc kubenswrapper[4823]: I0121 17:37:26.972745 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vh8wn" podStartSLOduration=5.428646918 podStartE2EDuration="1m18.972723389s" podCreationTimestamp="2026-01-21 17:36:08 +0000 UTC" firstStartedPulling="2026-01-21 17:36:11.388042116 +0000 UTC m=+1172.314172976" lastFinishedPulling="2026-01-21 17:37:24.932118577 +0000 UTC m=+1245.858249447" observedRunningTime="2026-01-21 17:37:26.972325549 +0000 UTC m=+1247.898456449" watchObservedRunningTime="2026-01-21 17:37:26.972723389 +0000 UTC m=+1247.898854249" Jan 21 17:37:27 crc kubenswrapper[4823]: I0121 17:37:27.961617 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"87b074e0-8609-4e85-a0df-ce3376e7b7df","Type":"ContainerStarted","Data":"1adba3771628da2feca4386ccbd8c866b74755a1c067ff6e498151a5e0845d35"} Jan 21 17:37:27 crc kubenswrapper[4823]: I0121 17:37:27.961972 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 17:37:27 crc kubenswrapper[4823]: I0121 17:37:27.997795 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.997764673 podStartE2EDuration="2.997764673s" podCreationTimestamp="2026-01-21 17:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:37:27.991935049 +0000 UTC m=+1248.918065969" watchObservedRunningTime="2026-01-21 17:37:27.997764673 +0000 UTC m=+1248.923895573" Jan 21 17:37:30 crc kubenswrapper[4823]: I0121 17:37:30.644448 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 17:37:30 crc kubenswrapper[4823]: I0121 17:37:30.645407 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 17:37:30 crc kubenswrapper[4823]: I0121 17:37:30.681540 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 17:37:31 crc kubenswrapper[4823]: I0121 17:37:31.017411 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fr4qp" event={"ID":"2d118987-76ea-46aa-9989-274e87e36d3a","Type":"ContainerStarted","Data":"fd63d187867d12a93605a1f217ca160c7f7ac70b145f5fd1fa052e27f01c12c9"} Jan 21 17:37:31 crc kubenswrapper[4823]: I0121 17:37:31.035440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerStarted","Data":"2f21487b13c87d48f26faeffe5c3794dcaa2e08c64a87b5b17f685d0aea18e90"} Jan 21 17:37:32 crc kubenswrapper[4823]: I0121 17:37:32.065233 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerStarted","Data":"3e2743663b971ddf944026dc05c9b4aa6425058354d28f36254dd7c7d0d23832"} Jan 21 17:37:32 crc kubenswrapper[4823]: 
I0121 17:37:32.102089 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-fr4qp" podStartSLOduration=6.157172853 podStartE2EDuration="1m24.101995196s" podCreationTimestamp="2026-01-21 17:36:08 +0000 UTC" firstStartedPulling="2026-01-21 17:36:11.32223169 +0000 UTC m=+1172.248362550" lastFinishedPulling="2026-01-21 17:37:29.267054033 +0000 UTC m=+1250.193184893" observedRunningTime="2026-01-21 17:37:32.097875944 +0000 UTC m=+1253.024006804" watchObservedRunningTime="2026-01-21 17:37:32.101995196 +0000 UTC m=+1253.028126056" Jan 21 17:37:33 crc kubenswrapper[4823]: I0121 17:37:33.435096 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6dc7996bf8-rhk6q" Jan 21 17:37:34 crc kubenswrapper[4823]: I0121 17:37:34.126778 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=46.126753477 podStartE2EDuration="46.126753477s" podCreationTimestamp="2026-01-21 17:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:37:34.114407023 +0000 UTC m=+1255.040537903" watchObservedRunningTime="2026-01-21 17:37:34.126753477 +0000 UTC m=+1255.052884337" Jan 21 17:37:34 crc kubenswrapper[4823]: I0121 17:37:34.154913 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:37:34 crc kubenswrapper[4823]: I0121 17:37:34.579249 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:37:34 crc kubenswrapper[4823]: I0121 17:37:34.699512 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 17:37:34 crc kubenswrapper[4823]: I0121 17:37:34.699995 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 17:37:34 crc kubenswrapper[4823]: I0121 17:37:34.705285 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.105019 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.367741 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.370332 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.375533 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.377049 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.378399 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.379219 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-m9bb9" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.446157 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.446704 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65c77\" (UniqueName: \"kubernetes.io/projected/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-kube-api-access-65c77\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.446778 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.446987 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config-secret\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.549089 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65c77\" (UniqueName: \"kubernetes.io/projected/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-kube-api-access-65c77\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.549396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.549493 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config-secret\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.550301 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.550973 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.558552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config-secret\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.559284 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.566252 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65c77\" (UniqueName: \"kubernetes.io/projected/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-kube-api-access-65c77\") pod \"openstackclient\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.649638 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.679004 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.680179 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.699979 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.709945 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.790638 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.792326 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.809796 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.859224 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/059733a2-b933-471e-b40b-3618874187a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.859676 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/059733a2-b933-471e-b40b-3618874187a0-openstack-config\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.859730 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4927\" (UniqueName: \"kubernetes.io/projected/059733a2-b933-471e-b40b-3618874187a0-kube-api-access-k4927\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.859891 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059733a2-b933-471e-b40b-3618874187a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: E0121 17:37:35.918696 4823 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 21 17:37:35 crc kubenswrapper[4823]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_ee4a8cef-cb2a-41ab-82f6-eb2f6e985618_0(600eb1d236e57791a2ba5f10817e6577cb0ccfe0b520578d5d822d94543f75e3): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"600eb1d236e57791a2ba5f10817e6577cb0ccfe0b520578d5d822d94543f75e3" Netns:"/var/run/netns/3228d1b3-871a-4ec1-a1d4-f62c31a28140" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=600eb1d236e57791a2ba5f10817e6577cb0ccfe0b520578d5d822d94543f75e3;K8S_POD_UID=ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618]: expected pod UID "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" but got "059733a2-b933-471e-b40b-3618874187a0" from Kube API Jan 21 17:37:35 crc kubenswrapper[4823]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 17:37:35 crc kubenswrapper[4823]: > Jan 21 17:37:35 crc kubenswrapper[4823]: E0121 17:37:35.918830 4823 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err=< Jan 21 17:37:35 crc kubenswrapper[4823]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_ee4a8cef-cb2a-41ab-82f6-eb2f6e985618_0(600eb1d236e57791a2ba5f10817e6577cb0ccfe0b520578d5d822d94543f75e3): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"600eb1d236e57791a2ba5f10817e6577cb0ccfe0b520578d5d822d94543f75e3" Netns:"/var/run/netns/3228d1b3-871a-4ec1-a1d4-f62c31a28140" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=600eb1d236e57791a2ba5f10817e6577cb0ccfe0b520578d5d822d94543f75e3;K8S_POD_UID=ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618]: expected pod UID "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" but got "059733a2-b933-471e-b40b-3618874187a0" from Kube API Jan 21 17:37:35 crc kubenswrapper[4823]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 17:37:35 crc kubenswrapper[4823]: > pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.962062 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059733a2-b933-471e-b40b-3618874187a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.962158 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/059733a2-b933-471e-b40b-3618874187a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.962311 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/059733a2-b933-471e-b40b-3618874187a0-openstack-config\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.962339 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4927\" (UniqueName: \"kubernetes.io/projected/059733a2-b933-471e-b40b-3618874187a0-kube-api-access-k4927\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.963359 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/059733a2-b933-471e-b40b-3618874187a0-openstack-config\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.966043 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059733a2-b933-471e-b40b-3618874187a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.966952 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/059733a2-b933-471e-b40b-3618874187a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:35 crc kubenswrapper[4823]: I0121 17:37:35.986548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4927\" (UniqueName: \"kubernetes.io/projected/059733a2-b933-471e-b40b-3618874187a0-kube-api-access-k4927\") pod \"openstackclient\" (UID: \"059733a2-b933-471e-b40b-3618874187a0\") " pod="openstack/openstackclient" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.109512 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.120380 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" podUID="059733a2-b933-471e-b40b-3618874187a0" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.122303 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.124751 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.158543 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.267787 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65c77\" (UniqueName: \"kubernetes.io/projected/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-kube-api-access-65c77\") pod \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.267898 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config-secret\") pod \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.267956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config\") pod \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.268128 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-combined-ca-bundle\") pod \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\" (UID: \"ee4a8cef-cb2a-41ab-82f6-eb2f6e985618\") " Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.270110 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" (UID: "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.274153 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-kube-api-access-65c77" (OuterVolumeSpecName: "kube-api-access-65c77") pod "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" (UID: "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618"). InnerVolumeSpecName "kube-api-access-65c77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.274401 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" (UID: "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.275927 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" (UID: "ee4a8cef-cb2a-41ab-82f6-eb2f6e985618"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.371628 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.371701 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65c77\" (UniqueName: \"kubernetes.io/projected/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-kube-api-access-65c77\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.371740 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.371760 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.436121 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.582690 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66bddd7dd6-67b6t" Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.644952 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64bdb4885b-ddprk"] Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.725598 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 17:37:36 crc kubenswrapper[4823]: I0121 17:37:36.757064 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:37:37 crc kubenswrapper[4823]: I0121 17:37:37.119330 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"059733a2-b933-471e-b40b-3618874187a0","Type":"ContainerStarted","Data":"f926555f3300de54ef22ccaef600a5a27586b40b2ac566ddebc44883121070ae"} Jan 21 17:37:37 crc kubenswrapper[4823]: I0121 17:37:37.119456 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 17:37:37 crc kubenswrapper[4823]: I0121 17:37:37.119559 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64bdb4885b-ddprk" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon" containerID="cri-o://4e46d9e6739ac4d11e1c17a1a18eb7088ea089e47013703887983e60a3eea237" gracePeriod=30 Jan 21 17:37:37 crc kubenswrapper[4823]: I0121 17:37:37.120260 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64bdb4885b-ddprk" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon-log" containerID="cri-o://9bdbd8dbcff4be15445e291de88aae0aaa2bf29fecc871dc52d140cbd8e40c78" gracePeriod=30 Jan 21 17:37:37 crc kubenswrapper[4823]: I0121 17:37:37.133570 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" podUID="059733a2-b933-471e-b40b-3618874187a0" Jan 21 17:37:37 crc kubenswrapper[4823]: I0121 17:37:37.356765 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4a8cef-cb2a-41ab-82f6-eb2f6e985618" path="/var/lib/kubelet/pods/ee4a8cef-cb2a-41ab-82f6-eb2f6e985618/volumes" Jan 21 17:37:41 crc kubenswrapper[4823]: I0121 17:37:41.158214 4823 generic.go:334] "Generic (PLEG): container finished" podID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerID="4e46d9e6739ac4d11e1c17a1a18eb7088ea089e47013703887983e60a3eea237" exitCode=0 Jan 21 17:37:41 crc kubenswrapper[4823]: I0121 17:37:41.158312 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64bdb4885b-ddprk" event={"ID":"7890c9eb-67a6-4c41-af5b-c57f0fddc533","Type":"ContainerDied","Data":"4e46d9e6739ac4d11e1c17a1a18eb7088ea089e47013703887983e60a3eea237"} Jan 21 17:37:42 crc kubenswrapper[4823]: I0121 17:37:42.252726 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64bdb4885b-ddprk" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 21 17:37:43 crc kubenswrapper[4823]: I0121 17:37:43.181457 4823 generic.go:334] "Generic (PLEG): container finished" podID="a7924f2b-6db5-4473-ae49-91c0d32fa817" containerID="7450e92d8570cca61abdb5f2fdf552a81b8e93450b302712d25c883f417f62db" exitCode=0 Jan 21 17:37:43 crc kubenswrapper[4823]: I0121 17:37:43.181519 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qnhv5" event={"ID":"a7924f2b-6db5-4473-ae49-91c0d32fa817","Type":"ContainerDied","Data":"7450e92d8570cca61abdb5f2fdf552a81b8e93450b302712d25c883f417f62db"} Jan 21 17:37:47 crc kubenswrapper[4823]: I0121 17:37:47.235798 4823 generic.go:334] "Generic (PLEG): container finished" podID="c29754ad-e324-474f-a0df-d450b9152aa3" containerID="0e25ba130262bf3981890d0157534915dbdd7f5f6d73bf4de6aa3f47178c655f" exitCode=0 Jan 21 17:37:47 crc kubenswrapper[4823]: I0121 17:37:47.235890 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vh8wn" event={"ID":"c29754ad-e324-474f-a0df-d450b9152aa3","Type":"ContainerDied","Data":"0e25ba130262bf3981890d0157534915dbdd7f5f6d73bf4de6aa3f47178c655f"} Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.425099 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-76d5f7bd8c-dgmn6"] Jan 21 17:37:50 crc 
kubenswrapper[4823]: I0121 17:37:50.427276 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.429721 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.429731 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.429923 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.439784 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76d5f7bd8c-dgmn6"] Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.508950 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-combined-ca-bundle\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.509127 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-internal-tls-certs\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.509175 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stcdd\" (UniqueName: \"kubernetes.io/projected/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-kube-api-access-stcdd\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.509368 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-config-data\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.509417 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-log-httpd\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.509521 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-public-tls-certs\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.509611 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-run-httpd\") pod 
\"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.509717 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-etc-swift\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.611743 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-combined-ca-bundle\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.611875 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-internal-tls-certs\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.611907 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stcdd\" (UniqueName: \"kubernetes.io/projected/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-kube-api-access-stcdd\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.611968 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-config-data\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.611998 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-log-httpd\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.612030 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-public-tls-certs\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.612073 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-run-httpd\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.612145 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-etc-swift\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: 
\"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.612743 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-log-httpd\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.612977 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-run-httpd\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.625240 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-etc-swift\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.625529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-combined-ca-bundle\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.625561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-public-tls-certs\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.632570 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-internal-tls-certs\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.638476 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-config-data\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.645913 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stcdd\" (UniqueName: \"kubernetes.io/projected/6b9e2d6c-5e93-426c-8c47-478d9ad360ed-kube-api-access-stcdd\") pod \"swift-proxy-76d5f7bd8c-dgmn6\" (UID: \"6b9e2d6c-5e93-426c-8c47-478d9ad360ed\") " pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:50 crc kubenswrapper[4823]: I0121 17:37:50.753104 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:52 crc kubenswrapper[4823]: I0121 17:37:52.252350 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64bdb4885b-ddprk" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 21 17:37:54 crc kubenswrapper[4823]: E0121 17:37:54.204460 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 21 17:37:54 crc kubenswrapper[4823]: E0121 17:37:54.205056 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndch8ch5f8h587h598hbdh5bfh5c8h58chc4h685h579h5ch5c4h68ch5d7h57fh9bh5bbhd9hb7h58ch669h646h5c6h85h5bdh87h695h678h5bfhf6q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4927,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(059733a2-b933-471e-b40b-3618874187a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 17:37:54 crc kubenswrapper[4823]: E0121 17:37:54.206673 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="059733a2-b933-471e-b40b-3618874187a0" Jan 21 17:37:54 crc kubenswrapper[4823]: E0121 17:37:54.321636 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="059733a2-b933-471e-b40b-3618874187a0" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.035272 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.045462 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qnhv5" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.124642 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdqqd\" (UniqueName: \"kubernetes.io/projected/c29754ad-e324-474f-a0df-d450b9152aa3-kube-api-access-mdqqd\") pod \"c29754ad-e324-474f-a0df-d450b9152aa3\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.124684 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfkpf\" (UniqueName: \"kubernetes.io/projected/a7924f2b-6db5-4473-ae49-91c0d32fa817-kube-api-access-pfkpf\") pod \"a7924f2b-6db5-4473-ae49-91c0d32fa817\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.124785 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-combined-ca-bundle\") pod \"c29754ad-e324-474f-a0df-d450b9152aa3\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.124826 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-config-data\") pod \"a7924f2b-6db5-4473-ae49-91c0d32fa817\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.124872 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-db-sync-config-data\") pod \"c29754ad-e324-474f-a0df-d450b9152aa3\" (UID: \"c29754ad-e324-474f-a0df-d450b9152aa3\") " Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.124940 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7924f2b-6db5-4473-ae49-91c0d32fa817-logs\") pod \"a7924f2b-6db5-4473-ae49-91c0d32fa817\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.124985 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-scripts\") pod \"a7924f2b-6db5-4473-ae49-91c0d32fa817\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.125091 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-combined-ca-bundle\") pod \"a7924f2b-6db5-4473-ae49-91c0d32fa817\" (UID: \"a7924f2b-6db5-4473-ae49-91c0d32fa817\") " Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.126689 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7924f2b-6db5-4473-ae49-91c0d32fa817-logs" (OuterVolumeSpecName: "logs") pod "a7924f2b-6db5-4473-ae49-91c0d32fa817" (UID: "a7924f2b-6db5-4473-ae49-91c0d32fa817"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.136345 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7924f2b-6db5-4473-ae49-91c0d32fa817-kube-api-access-pfkpf" (OuterVolumeSpecName: "kube-api-access-pfkpf") pod "a7924f2b-6db5-4473-ae49-91c0d32fa817" (UID: "a7924f2b-6db5-4473-ae49-91c0d32fa817"). InnerVolumeSpecName "kube-api-access-pfkpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.139518 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c29754ad-e324-474f-a0df-d450b9152aa3" (UID: "c29754ad-e324-474f-a0df-d450b9152aa3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.139763 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29754ad-e324-474f-a0df-d450b9152aa3-kube-api-access-mdqqd" (OuterVolumeSpecName: "kube-api-access-mdqqd") pod "c29754ad-e324-474f-a0df-d450b9152aa3" (UID: "c29754ad-e324-474f-a0df-d450b9152aa3"). InnerVolumeSpecName "kube-api-access-mdqqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.139917 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-scripts" (OuterVolumeSpecName: "scripts") pod "a7924f2b-6db5-4473-ae49-91c0d32fa817" (UID: "a7924f2b-6db5-4473-ae49-91c0d32fa817"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.182270 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-config-data" (OuterVolumeSpecName: "config-data") pod "a7924f2b-6db5-4473-ae49-91c0d32fa817" (UID: "a7924f2b-6db5-4473-ae49-91c0d32fa817"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.191122 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7924f2b-6db5-4473-ae49-91c0d32fa817" (UID: "a7924f2b-6db5-4473-ae49-91c0d32fa817"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.225026 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c29754ad-e324-474f-a0df-d450b9152aa3" (UID: "c29754ad-e324-474f-a0df-d450b9152aa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.233368 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7924f2b-6db5-4473-ae49-91c0d32fa817-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.233649 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.233741 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.233807 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdqqd\" (UniqueName: \"kubernetes.io/projected/c29754ad-e324-474f-a0df-d450b9152aa3-kube-api-access-mdqqd\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.233878 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfkpf\" (UniqueName: \"kubernetes.io/projected/a7924f2b-6db5-4473-ae49-91c0d32fa817-kube-api-access-pfkpf\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.233944 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.234029 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7924f2b-6db5-4473-ae49-91c0d32fa817-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.234083 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29754ad-e324-474f-a0df-d450b9152aa3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.249106 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-54965ff674-mlg85" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.347595 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vh8wn" event={"ID":"c29754ad-e324-474f-a0df-d450b9152aa3","Type":"ContainerDied","Data":"3a99e0228428acd7c3d1421496895ed122a2cff6f4143a9e3cae2968acedccd7"} Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.347633 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a99e0228428acd7c3d1421496895ed122a2cff6f4143a9e3cae2968acedccd7" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.347684 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vh8wn" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.366340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qnhv5" event={"ID":"a7924f2b-6db5-4473-ae49-91c0d32fa817","Type":"ContainerDied","Data":"bcc362eb1c9fe031b93862c9c1a23ee5164a5f097a3a84f4eff665ee0d0fd94b"} Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.366395 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc362eb1c9fe031b93862c9c1a23ee5164a5f097a3a84f4eff665ee0d0fd94b" Jan 21 17:37:56 crc kubenswrapper[4823]: I0121 17:37:56.367557 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qnhv5" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.341966 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68db9cf4b4-kzfgq"] Jan 21 17:37:57 crc kubenswrapper[4823]: E0121 17:37:57.343545 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29754ad-e324-474f-a0df-d450b9152aa3" containerName="barbican-db-sync" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.343566 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29754ad-e324-474f-a0df-d450b9152aa3" containerName="barbican-db-sync" Jan 21 17:37:57 crc kubenswrapper[4823]: E0121 17:37:57.343583 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7924f2b-6db5-4473-ae49-91c0d32fa817" containerName="placement-db-sync" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.343593 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7924f2b-6db5-4473-ae49-91c0d32fa817" containerName="placement-db-sync" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.343844 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7924f2b-6db5-4473-ae49-91c0d32fa817" containerName="placement-db-sync" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.343898 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29754ad-e324-474f-a0df-d450b9152aa3" containerName="barbican-db-sync" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.360039 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.401360 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.404344 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.405696 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tzvkv" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.473504 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.475836 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.502176 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-scripts\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.502296 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-combined-ca-bundle\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.502411 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-config-data\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.502534 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftktr\" (UniqueName: \"kubernetes.io/projected/eba57ea7-deed-4d3e-9327-f2baaf9e920d-kube-api-access-ftktr\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.502782 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-internal-tls-certs\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.502948 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eba57ea7-deed-4d3e-9327-f2baaf9e920d-logs\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.503155 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-public-tls-certs\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.549022 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68db9cf4b4-kzfgq"] Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.549130 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-78cfb466c-qccf2"] Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.551029 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.570812 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78cfb466c-qccf2"] Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.574637 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.574988 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.575282 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5p2x7" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606195 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0509a27d-cceb-45b3-9595-b5e5489a3934-config-data\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606248 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eba57ea7-deed-4d3e-9327-f2baaf9e920d-logs\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606321 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-public-tls-certs\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606352 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0509a27d-cceb-45b3-9595-b5e5489a3934-logs\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606393 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-scripts\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606440 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-combined-ca-bundle\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606460 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-config-data\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606481 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftktr\" (UniqueName: \"kubernetes.io/projected/eba57ea7-deed-4d3e-9327-f2baaf9e920d-kube-api-access-ftktr\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606507 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0509a27d-cceb-45b3-9595-b5e5489a3934-config-data-custom\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606535 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0509a27d-cceb-45b3-9595-b5e5489a3934-combined-ca-bundle\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.606966 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eba57ea7-deed-4d3e-9327-f2baaf9e920d-logs\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.607969 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8hnm\" (UniqueName: \"kubernetes.io/projected/0509a27d-cceb-45b3-9595-b5e5489a3934-kube-api-access-t8hnm\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.608001 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-internal-tls-certs\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.620939 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-internal-tls-certs\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.623574 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-keystone-listener-666c84c45d-q2ttq"] Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.625771 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.629692 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-config-data\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.629995 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-combined-ca-bundle\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.631381 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-scripts\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.631672 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.632370 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftktr\" (UniqueName: \"kubernetes.io/projected/eba57ea7-deed-4d3e-9327-f2baaf9e920d-kube-api-access-ftktr\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.633897 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba57ea7-deed-4d3e-9327-f2baaf9e920d-public-tls-certs\") pod \"placement-68db9cf4b4-kzfgq\" (UID: \"eba57ea7-deed-4d3e-9327-f2baaf9e920d\") " pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.646936 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-666c84c45d-q2ttq"] Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.681462 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z654m"] Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.684200 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.695243 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z654m"] Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.711445 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85f89f98cd-9zzvq"] Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.711544 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0509a27d-cceb-45b3-9595-b5e5489a3934-logs\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.711628 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c127d68c-e927-419c-a632-2a85db61e595-combined-ca-bundle\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.711687 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.711721 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.711829 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c127d68c-e927-419c-a632-2a85db61e595-config-data-custom\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712121 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0509a27d-cceb-45b3-9595-b5e5489a3934-config-data-custom\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712163 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6npg\" (UniqueName: \"kubernetes.io/projected/04e9985d-1c05-40ec-8cae-ed502baa16f5-kube-api-access-h6npg\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712208 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0509a27d-cceb-45b3-9595-b5e5489a3934-combined-ca-bundle\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712236 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c127d68c-e927-419c-a632-2a85db61e595-logs\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712244 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0509a27d-cceb-45b3-9595-b5e5489a3934-logs\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712279 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-config\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712408 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712570 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xmvp\" (UniqueName: \"kubernetes.io/projected/c127d68c-e927-419c-a632-2a85db61e595-kube-api-access-9xmvp\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712638 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8hnm\" (UniqueName: \"kubernetes.io/projected/0509a27d-cceb-45b3-9595-b5e5489a3934-kube-api-access-t8hnm\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.712947 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c127d68c-e927-419c-a632-2a85db61e595-config-data\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.713099 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0509a27d-cceb-45b3-9595-b5e5489a3934-config-data\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.713159 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.718397 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0509a27d-cceb-45b3-9595-b5e5489a3934-config-data-custom\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.720461 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.724119 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0509a27d-cceb-45b3-9595-b5e5489a3934-config-data\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.726592 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.747752 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0509a27d-cceb-45b3-9595-b5e5489a3934-combined-ca-bundle\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.752567 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8hnm\" (UniqueName: \"kubernetes.io/projected/0509a27d-cceb-45b3-9595-b5e5489a3934-kube-api-access-t8hnm\") pod \"barbican-worker-78cfb466c-qccf2\" (UID: \"0509a27d-cceb-45b3-9595-b5e5489a3934\") " pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.761525 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85f89f98cd-9zzvq"] Jan 21 17:37:57 crc kubenswrapper[4823]: E0121 17:37:57.780236 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 21 17:37:57 crc kubenswrapper[4823]: E0121 17:37:57.780567 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsjgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6697a997-d4df-46c4-8520-8d23c6203f87): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.781361 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:37:57 crc kubenswrapper[4823]: E0121 17:37:57.781666 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="6697a997-d4df-46c4-8520-8d23c6203f87" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.817979 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xmvp\" (UniqueName: \"kubernetes.io/projected/c127d68c-e927-419c-a632-2a85db61e595-kube-api-access-9xmvp\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818059 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55j9\" (UniqueName: \"kubernetes.io/projected/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-kube-api-access-r55j9\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818098 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c127d68c-e927-419c-a632-2a85db61e595-config-data\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818131 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818205 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-combined-ca-bundle\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818252 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data-custom\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-logs\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: 
\"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818328 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c127d68c-e927-419c-a632-2a85db61e595-combined-ca-bundle\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818362 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818381 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818405 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c127d68c-e927-419c-a632-2a85db61e595-config-data-custom\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.818453 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.820057 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.820114 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.820141 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.820630 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6npg\" (UniqueName: \"kubernetes.io/projected/04e9985d-1c05-40ec-8cae-ed502baa16f5-kube-api-access-h6npg\") pod 
\"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.820891 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c127d68c-e927-419c-a632-2a85db61e595-logs\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.821047 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-config\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.821218 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.822229 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c127d68c-e927-419c-a632-2a85db61e595-logs\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.824167 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c127d68c-e927-419c-a632-2a85db61e595-config-data\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.825881 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.826098 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-config\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.842886 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c127d68c-e927-419c-a632-2a85db61e595-config-data-custom\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.856316 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xmvp\" (UniqueName: \"kubernetes.io/projected/c127d68c-e927-419c-a632-2a85db61e595-kube-api-access-9xmvp\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: 
\"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.856448 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6npg\" (UniqueName: \"kubernetes.io/projected/04e9985d-1c05-40ec-8cae-ed502baa16f5-kube-api-access-h6npg\") pod \"dnsmasq-dns-848cf88cfc-z654m\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") " pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.859879 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c127d68c-e927-419c-a632-2a85db61e595-combined-ca-bundle\") pod \"barbican-keystone-listener-666c84c45d-q2ttq\" (UID: \"c127d68c-e927-419c-a632-2a85db61e595\") " pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.900448 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78cfb466c-qccf2" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.923714 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-logs\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.923961 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.924240 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r55j9\" (UniqueName: \"kubernetes.io/projected/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-kube-api-access-r55j9\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.924573 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-combined-ca-bundle\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.925475 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data-custom\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.925535 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-logs\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.931283 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/neutron-959c75cfc-2zd2j" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.931970 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data-custom\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.941260 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.942105 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-combined-ca-bundle\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:57 crc kubenswrapper[4823]: I0121 17:37:57.979727 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r55j9\" (UniqueName: \"kubernetes.io/projected/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-kube-api-access-r55j9\") pod \"barbican-api-85f89f98cd-9zzvq\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.035166 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54965ff674-mlg85"] Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.035378 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54965ff674-mlg85" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-api" containerID="cri-o://3bf82de43ab7e722fb03c673fca248493493b44727320a86384bc5782d7bf19e" gracePeriod=30 Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.035821 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54965ff674-mlg85" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-httpd" containerID="cri-o://6947b4c028998adce00bf3650a53a2d4cd2d9ac0e92047b3123fddbe9fb01e4e" gracePeriod=30 Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.062287 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76d5f7bd8c-dgmn6"] Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.074336 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.092103 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.100389 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.528944 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68db9cf4b4-kzfgq"] Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.605374 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68db9cf4b4-kzfgq" event={"ID":"eba57ea7-deed-4d3e-9327-f2baaf9e920d","Type":"ContainerStarted","Data":"3f0fab5ee5f807f6928de86524c48885e75e05c9131e1e15875e5a15675ed759"} Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.612364 4823 generic.go:334] "Generic (PLEG): container finished" podID="42280059-4e27-4de8-ace4-aeb184783f74" containerID="6947b4c028998adce00bf3650a53a2d4cd2d9ac0e92047b3123fddbe9fb01e4e" exitCode=0 Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.612435 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54965ff674-mlg85" event={"ID":"42280059-4e27-4de8-ace4-aeb184783f74","Type":"ContainerDied","Data":"6947b4c028998adce00bf3650a53a2d4cd2d9ac0e92047b3123fddbe9fb01e4e"} Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.615062 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" event={"ID":"6b9e2d6c-5e93-426c-8c47-478d9ad360ed","Type":"ContainerStarted","Data":"fd5323d72ccbf1dd14eff26536a8df9a564ad5cee92d3a7efab524af8da7d9a4"} Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.615325 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6697a997-d4df-46c4-8520-8d23c6203f87" containerName="ceilometer-notification-agent" containerID="cri-o://b4eef1af5fcaa8c3d62477f21ed00c840d61028222a3c6a05f4b2bb375bdd081" gracePeriod=30 Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.720332 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78cfb466c-qccf2"] Jan 21 17:37:58 crc kubenswrapper[4823]: W0121 17:37:58.729035 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0509a27d_cceb_45b3_9595_b5e5489a3934.slice/crio-a870a99c0c9c9cb6ec47ced7d05047dec4251bfc2bf85e05df799165f8a38348 WatchSource:0}: Error finding container a870a99c0c9c9cb6ec47ced7d05047dec4251bfc2bf85e05df799165f8a38348: Status 404 returned error can't find the container with id a870a99c0c9c9cb6ec47ced7d05047dec4251bfc2bf85e05df799165f8a38348 Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.784150 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-666c84c45d-q2ttq"] Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.872946 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85f89f98cd-9zzvq"] Jan 21 17:37:58 crc kubenswrapper[4823]: I0121 17:37:58.884235 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z654m"] Jan 21 17:37:58 crc kubenswrapper[4823]: W0121 17:37:58.924160 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84dc38f6_1bcf_46e3_b82b_aa5f056c1b85.slice/crio-8b925c49c78e9a5d3857a4bdc6ad11834c74d5745a8634083ff612c13dd43e4b WatchSource:0}: Error finding container 8b925c49c78e9a5d3857a4bdc6ad11834c74d5745a8634083ff612c13dd43e4b: Status 404 returned error can't find the container with id 8b925c49c78e9a5d3857a4bdc6ad11834c74d5745a8634083ff612c13dd43e4b Jan 21 
17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.633032 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85f89f98cd-9zzvq" event={"ID":"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85","Type":"ContainerStarted","Data":"ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b"} Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.635094 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85f89f98cd-9zzvq" event={"ID":"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85","Type":"ContainerStarted","Data":"8b925c49c78e9a5d3857a4bdc6ad11834c74d5745a8634083ff612c13dd43e4b"} Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.635189 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" event={"ID":"04e9985d-1c05-40ec-8cae-ed502baa16f5","Type":"ContainerStarted","Data":"53d506d064d6c42a2894d2ec6f823a2fc89c43388ae7558ace314ab65511d212"} Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.635220 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" event={"ID":"04e9985d-1c05-40ec-8cae-ed502baa16f5","Type":"ContainerStarted","Data":"a72d6c9d4f2ed157b89279a5ff010728a72312debc5789b75635d8ed089c613f"} Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.639442 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78cfb466c-qccf2" event={"ID":"0509a27d-cceb-45b3-9595-b5e5489a3934","Type":"ContainerStarted","Data":"a870a99c0c9c9cb6ec47ced7d05047dec4251bfc2bf85e05df799165f8a38348"} Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.640670 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" event={"ID":"c127d68c-e927-419c-a632-2a85db61e595","Type":"ContainerStarted","Data":"d48dca4f4914f3126791b79cb94caba2fa00ede53608dd81525f22f78f2344ab"} Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.644302 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" event={"ID":"6b9e2d6c-5e93-426c-8c47-478d9ad360ed","Type":"ContainerStarted","Data":"8440bb17cc5eb64940ef02ab0682d3563201870f3961618192f4c9a113909727"} Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.644339 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" event={"ID":"6b9e2d6c-5e93-426c-8c47-478d9ad360ed","Type":"ContainerStarted","Data":"10e20773dc84fe0788ea6812ff7d06089f17fa1b7289ae21cabe3f229fd8fb1c"} Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.644772 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.644899 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.647594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68db9cf4b4-kzfgq" event={"ID":"eba57ea7-deed-4d3e-9327-f2baaf9e920d","Type":"ContainerStarted","Data":"c2726175781965657772d862d1c088f8ceb088d4ec91698cb278b62d67cce86b"} Jan 21 17:37:59 crc kubenswrapper[4823]: I0121 17:37:59.647675 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68db9cf4b4-kzfgq" event={"ID":"eba57ea7-deed-4d3e-9327-f2baaf9e920d","Type":"ContainerStarted","Data":"ca586b6444351f1be2bbf108c1f49670ddb2e0e9dc35890ea5cd7c3ffb035011"} Jan 21 17:37:59 crc kubenswrapper[4823]: 
I0121 17:37:59.695651 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" podStartSLOduration=9.695622137 podStartE2EDuration="9.695622137s" podCreationTimestamp="2026-01-21 17:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:37:59.682153445 +0000 UTC m=+1280.608284305" watchObservedRunningTime="2026-01-21 17:37:59.695622137 +0000 UTC m=+1280.621752997" Jan 21 17:38:00 crc kubenswrapper[4823]: I0121 17:38:00.687298 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85f89f98cd-9zzvq" event={"ID":"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85","Type":"ContainerStarted","Data":"f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d"} Jan 21 17:38:00 crc kubenswrapper[4823]: I0121 17:38:00.688837 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:38:00 crc kubenswrapper[4823]: I0121 17:38:00.689116 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:38:00 crc kubenswrapper[4823]: I0121 17:38:00.705304 4823 generic.go:334] "Generic (PLEG): container finished" podID="04e9985d-1c05-40ec-8cae-ed502baa16f5" containerID="53d506d064d6c42a2894d2ec6f823a2fc89c43388ae7558ace314ab65511d212" exitCode=0 Jan 21 17:38:00 crc kubenswrapper[4823]: I0121 17:38:00.706429 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" event={"ID":"04e9985d-1c05-40ec-8cae-ed502baa16f5","Type":"ContainerDied","Data":"53d506d064d6c42a2894d2ec6f823a2fc89c43388ae7558ace314ab65511d212"} Jan 21 17:38:00 crc kubenswrapper[4823]: I0121 17:38:00.706471 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:38:00 crc kubenswrapper[4823]: I0121 17:38:00.713426 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85f89f98cd-9zzvq" podStartSLOduration=3.713411471 podStartE2EDuration="3.713411471s" podCreationTimestamp="2026-01-21 17:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:00.71292633 +0000 UTC m=+1281.639057210" watchObservedRunningTime="2026-01-21 17:38:00.713411471 +0000 UTC m=+1281.639542331" Jan 21 17:38:00 crc kubenswrapper[4823]: I0121 17:38:00.773436 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68db9cf4b4-kzfgq" podStartSLOduration=3.773417472 podStartE2EDuration="3.773417472s" podCreationTimestamp="2026-01-21 17:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:00.769088695 +0000 UTC m=+1281.695219555" watchObservedRunningTime="2026-01-21 17:38:00.773417472 +0000 UTC m=+1281.699548332" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.508667 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c955fd8fb-p27tx"] Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.510590 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.514005 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.514870 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.517417 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c955fd8fb-p27tx"] Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.546269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-config-data-custom\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.546351 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-internal-tls-certs\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.546406 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-config-data\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.546454 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-logs\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.546481 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-combined-ca-bundle\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.546499 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-public-tls-certs\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.546523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fbrr\" (UniqueName: \"kubernetes.io/projected/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-kube-api-access-6fbrr\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.648461 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-logs\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.648539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-combined-ca-bundle\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.648564 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-public-tls-certs\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.648591 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fbrr\" (UniqueName: \"kubernetes.io/projected/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-kube-api-access-6fbrr\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.648655 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-config-data-custom\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.648698 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-internal-tls-certs\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.648747 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-config-data\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.649883 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-logs\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.654374 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-internal-tls-certs\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.655047 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-public-tls-certs\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.658636 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-combined-ca-bundle\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.658989 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-config-data\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.666874 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-config-data-custom\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.667979 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fbrr\" (UniqueName: \"kubernetes.io/projected/f6a34862-ff4b-4bc6-ba5c-803fdaeb722f-kube-api-access-6fbrr\") pod \"barbican-api-6c955fd8fb-p27tx\" (UID: \"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f\") " pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.714262 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:38:01 crc kubenswrapper[4823]: I0121 17:38:01.834729 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c955fd8fb-p27tx" Jan 21 17:38:02 crc kubenswrapper[4823]: I0121 17:38:02.252578 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64bdb4885b-ddprk" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 21 17:38:02 crc kubenswrapper[4823]: I0121 17:38:02.252719 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64bdb4885b-ddprk" Jan 21 17:38:02 crc kubenswrapper[4823]: I0121 17:38:02.734355 4823 generic.go:334] "Generic (PLEG): container finished" podID="42280059-4e27-4de8-ace4-aeb184783f74" containerID="3bf82de43ab7e722fb03c673fca248493493b44727320a86384bc5782d7bf19e" exitCode=0 Jan 21 17:38:02 crc kubenswrapper[4823]: I0121 17:38:02.734435 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54965ff674-mlg85" event={"ID":"42280059-4e27-4de8-ace4-aeb184783f74","Type":"ContainerDied","Data":"3bf82de43ab7e722fb03c673fca248493493b44727320a86384bc5782d7bf19e"} Jan 21 17:38:02 crc kubenswrapper[4823]: I0121 17:38:02.739000 4823 generic.go:334] "Generic (PLEG): container finished" podID="2d118987-76ea-46aa-9989-274e87e36d3a" containerID="fd63d187867d12a93605a1f217ca160c7f7ac70b145f5fd1fa052e27f01c12c9" exitCode=0 Jan 21 17:38:02 crc kubenswrapper[4823]: I0121 17:38:02.739096 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fr4qp" event={"ID":"2d118987-76ea-46aa-9989-274e87e36d3a","Type":"ContainerDied","Data":"fd63d187867d12a93605a1f217ca160c7f7ac70b145f5fd1fa052e27f01c12c9"} Jan 21 17:38:02 crc kubenswrapper[4823]: I0121 17:38:02.741598 4823 generic.go:334] "Generic (PLEG): container finished" podID="6697a997-d4df-46c4-8520-8d23c6203f87" containerID="b4eef1af5fcaa8c3d62477f21ed00c840d61028222a3c6a05f4b2bb375bdd081" exitCode=0 Jan 21 17:38:02 crc kubenswrapper[4823]: I0121 17:38:02.742264 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6697a997-d4df-46c4-8520-8d23c6203f87","Type":"ContainerDied","Data":"b4eef1af5fcaa8c3d62477f21ed00c840d61028222a3c6a05f4b2bb375bdd081"} Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.575460 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.595530 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-log-httpd\") pod \"6697a997-d4df-46c4-8520-8d23c6203f87\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.595746 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsjgz\" (UniqueName: \"kubernetes.io/projected/6697a997-d4df-46c4-8520-8d23c6203f87-kube-api-access-hsjgz\") pod \"6697a997-d4df-46c4-8520-8d23c6203f87\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.595921 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-run-httpd\") pod \"6697a997-d4df-46c4-8520-8d23c6203f87\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.595969 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-combined-ca-bundle\") pod \"6697a997-d4df-46c4-8520-8d23c6203f87\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.596004 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-sg-core-conf-yaml\") pod \"6697a997-d4df-46c4-8520-8d23c6203f87\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.596063 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-scripts\") pod \"6697a997-d4df-46c4-8520-8d23c6203f87\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.596101 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-config-data\") pod \"6697a997-d4df-46c4-8520-8d23c6203f87\" (UID: \"6697a997-d4df-46c4-8520-8d23c6203f87\") " Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.597052 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6697a997-d4df-46c4-8520-8d23c6203f87" (UID: "6697a997-d4df-46c4-8520-8d23c6203f87"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.598864 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6697a997-d4df-46c4-8520-8d23c6203f87" (UID: "6697a997-d4df-46c4-8520-8d23c6203f87"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.605923 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-scripts" (OuterVolumeSpecName: "scripts") pod "6697a997-d4df-46c4-8520-8d23c6203f87" (UID: "6697a997-d4df-46c4-8520-8d23c6203f87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.618272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6697a997-d4df-46c4-8520-8d23c6203f87" (UID: "6697a997-d4df-46c4-8520-8d23c6203f87"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.624478 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6697a997-d4df-46c4-8520-8d23c6203f87-kube-api-access-hsjgz" (OuterVolumeSpecName: "kube-api-access-hsjgz") pod "6697a997-d4df-46c4-8520-8d23c6203f87" (UID: "6697a997-d4df-46c4-8520-8d23c6203f87"). InnerVolumeSpecName "kube-api-access-hsjgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.641327 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-config-data" (OuterVolumeSpecName: "config-data") pod "6697a997-d4df-46c4-8520-8d23c6203f87" (UID: "6697a997-d4df-46c4-8520-8d23c6203f87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.641730 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6697a997-d4df-46c4-8520-8d23c6203f87" (UID: "6697a997-d4df-46c4-8520-8d23c6203f87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.698689 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.698957 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.699056 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.699161 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6697a997-d4df-46c4-8520-8d23c6203f87-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.699235 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.699317 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsjgz\" (UniqueName: \"kubernetes.io/projected/6697a997-d4df-46c4-8520-8d23c6203f87-kube-api-access-hsjgz\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.699391 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6697a997-d4df-46c4-8520-8d23c6203f87-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.756704 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.759007 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6697a997-d4df-46c4-8520-8d23c6203f87","Type":"ContainerDied","Data":"592ea870a90482eca89950162e621e2e2b85554b08d6279099607c8c98b520b4"} Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.759098 4823 scope.go:117] "RemoveContainer" containerID="b4eef1af5fcaa8c3d62477f21ed00c840d61028222a3c6a05f4b2bb375bdd081" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.855544 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.888297 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.900575 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:03 crc kubenswrapper[4823]: E0121 17:38:03.901164 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6697a997-d4df-46c4-8520-8d23c6203f87" containerName="ceilometer-notification-agent" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.901214 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6697a997-d4df-46c4-8520-8d23c6203f87" containerName="ceilometer-notification-agent" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.901474 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6697a997-d4df-46c4-8520-8d23c6203f87" containerName="ceilometer-notification-agent" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.904348 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.907500 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.911611 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 17:38:03 crc kubenswrapper[4823]: I0121 17:38:03.955000 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.005447 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-config-data\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.007870 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr6gj\" (UniqueName: \"kubernetes.io/projected/67c2aec1-96fe-498d-9638-7b3fa2347f26-kube-api-access-cr6gj\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.007949 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.008005 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-run-httpd\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.008099 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-scripts\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.008183 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-log-httpd\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.008330 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.111312 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr6gj\" (UniqueName: \"kubernetes.io/projected/67c2aec1-96fe-498d-9638-7b3fa2347f26-kube-api-access-cr6gj\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.111389 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.111424 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-run-httpd\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.111467 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-scripts\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.111507 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-log-httpd\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.111569 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.111607 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-config-data\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.112517 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-run-httpd\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.112618 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-log-httpd\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.127907 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-scripts\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.129092 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.130524 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-config-data\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.135006 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.138515 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr6gj\" (UniqueName: \"kubernetes.io/projected/67c2aec1-96fe-498d-9638-7b3fa2347f26-kube-api-access-cr6gj\") pod \"ceilometer-0\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.257651 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.271159 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-fr4qp" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.314296 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d118987-76ea-46aa-9989-274e87e36d3a-etc-machine-id\") pod \"2d118987-76ea-46aa-9989-274e87e36d3a\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.314381 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-config-data\") pod \"2d118987-76ea-46aa-9989-274e87e36d3a\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.314453 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gkfv\" (UniqueName: \"kubernetes.io/projected/2d118987-76ea-46aa-9989-274e87e36d3a-kube-api-access-8gkfv\") pod \"2d118987-76ea-46aa-9989-274e87e36d3a\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.314523 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-combined-ca-bundle\") pod \"2d118987-76ea-46aa-9989-274e87e36d3a\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.314544 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-db-sync-config-data\") pod \"2d118987-76ea-46aa-9989-274e87e36d3a\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.314645 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-scripts\") pod \"2d118987-76ea-46aa-9989-274e87e36d3a\" (UID: \"2d118987-76ea-46aa-9989-274e87e36d3a\") " Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.315922 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d118987-76ea-46aa-9989-274e87e36d3a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2d118987-76ea-46aa-9989-274e87e36d3a" (UID: "2d118987-76ea-46aa-9989-274e87e36d3a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.328313 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2d118987-76ea-46aa-9989-274e87e36d3a" (UID: "2d118987-76ea-46aa-9989-274e87e36d3a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.346387 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-scripts" (OuterVolumeSpecName: "scripts") pod "2d118987-76ea-46aa-9989-274e87e36d3a" (UID: "2d118987-76ea-46aa-9989-274e87e36d3a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.348177 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d118987-76ea-46aa-9989-274e87e36d3a-kube-api-access-8gkfv" (OuterVolumeSpecName: "kube-api-access-8gkfv") pod "2d118987-76ea-46aa-9989-274e87e36d3a" (UID: "2d118987-76ea-46aa-9989-274e87e36d3a"). InnerVolumeSpecName "kube-api-access-8gkfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.416710 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.416751 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.416763 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d118987-76ea-46aa-9989-274e87e36d3a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.416775 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gkfv\" (UniqueName: \"kubernetes.io/projected/2d118987-76ea-46aa-9989-274e87e36d3a-kube-api-access-8gkfv\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.502192 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d118987-76ea-46aa-9989-274e87e36d3a" (UID: "2d118987-76ea-46aa-9989-274e87e36d3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.519524 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.536511 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-config-data" (OuterVolumeSpecName: "config-data") pod "2d118987-76ea-46aa-9989-274e87e36d3a" (UID: "2d118987-76ea-46aa-9989-274e87e36d3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.621691 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d118987-76ea-46aa-9989-274e87e36d3a-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.628354 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54965ff674-mlg85"
Jan 21 17:38:04 crc kubenswrapper[4823]: W0121 17:38:04.723013 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6a34862_ff4b_4bc6_ba5c_803fdaeb722f.slice/crio-555c7e191f52fb3c322c57c737be21fd2326e88b34c9c87c19bcb491e655319b WatchSource:0}: Error finding container 555c7e191f52fb3c322c57c737be21fd2326e88b34c9c87c19bcb491e655319b: Status 404 returned error can't find the container with id 555c7e191f52fb3c322c57c737be21fd2326e88b34c9c87c19bcb491e655319b
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.723782 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-ovndb-tls-certs\") pod \"42280059-4e27-4de8-ace4-aeb184783f74\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") "
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.723977 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrgjd\" (UniqueName: \"kubernetes.io/projected/42280059-4e27-4de8-ace4-aeb184783f74-kube-api-access-lrgjd\") pod \"42280059-4e27-4de8-ace4-aeb184783f74\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") "
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.724115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-config\") pod \"42280059-4e27-4de8-ace4-aeb184783f74\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") "
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.724162 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-combined-ca-bundle\") pod \"42280059-4e27-4de8-ace4-aeb184783f74\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") "
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.724316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-httpd-config\") pod \"42280059-4e27-4de8-ace4-aeb184783f74\" (UID: \"42280059-4e27-4de8-ace4-aeb184783f74\") "
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.735101 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "42280059-4e27-4de8-ace4-aeb184783f74" (UID: "42280059-4e27-4de8-ace4-aeb184783f74"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.744740 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42280059-4e27-4de8-ace4-aeb184783f74-kube-api-access-lrgjd" (OuterVolumeSpecName: "kube-api-access-lrgjd") pod "42280059-4e27-4de8-ace4-aeb184783f74" (UID: "42280059-4e27-4de8-ace4-aeb184783f74"). InnerVolumeSpecName "kube-api-access-lrgjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.774134 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c955fd8fb-p27tx"]
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.844775 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.845198 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrgjd\" (UniqueName: \"kubernetes.io/projected/42280059-4e27-4de8-ace4-aeb184783f74-kube-api-access-lrgjd\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.855270 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78cfb466c-qccf2" event={"ID":"0509a27d-cceb-45b3-9595-b5e5489a3934","Type":"ContainerStarted","Data":"dbcccf29ac050c94e0439ee10255cfb3591c85aa0bf58f994be664dd8267d0bd"}
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.879257 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c955fd8fb-p27tx" event={"ID":"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f","Type":"ContainerStarted","Data":"555c7e191f52fb3c322c57c737be21fd2326e88b34c9c87c19bcb491e655319b"}
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.912603 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fr4qp" event={"ID":"2d118987-76ea-46aa-9989-274e87e36d3a","Type":"ContainerDied","Data":"64dfc9ac5847b858b357b0f1ec62aebe1c33d013b23f39ee568f932c5cc050d2"}
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.912646 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64dfc9ac5847b858b357b0f1ec62aebe1c33d013b23f39ee568f932c5cc050d2"
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.912717 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fr4qp"
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.972436 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-config" (OuterVolumeSpecName: "config") pod "42280059-4e27-4de8-ace4-aeb184783f74" (UID: "42280059-4e27-4de8-ace4-aeb184783f74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.974119 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "42280059-4e27-4de8-ace4-aeb184783f74" (UID: "42280059-4e27-4de8-ace4-aeb184783f74"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.983291 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.997957 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54965ff674-mlg85"
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.998616 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54965ff674-mlg85" event={"ID":"42280059-4e27-4de8-ace4-aeb184783f74","Type":"ContainerDied","Data":"d8606d0d935e6d2068274388930e8a5ffa3f63aa78638970719043dc7b31b08b"}
Jan 21 17:38:04 crc kubenswrapper[4823]: I0121 17:38:04.998690 4823 scope.go:117] "RemoveContainer" containerID="6947b4c028998adce00bf3650a53a2d4cd2d9ac0e92047b3123fddbe9fb01e4e"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.021730 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" event={"ID":"04e9985d-1c05-40ec-8cae-ed502baa16f5","Type":"ContainerStarted","Data":"5d8ad805b33b8d24a9a7998b83529872ac233b4433770d1af169eb557abdc753"}
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.022609 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-z654m"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.041779 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" event={"ID":"c127d68c-e927-419c-a632-2a85db61e595","Type":"ContainerStarted","Data":"dbbd50a79cd129d24d9cb04b4574d5187fba05ebb9e358c18f6880554bb8a51b"}
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.064604 4823 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.064640 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-config\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.091417 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42280059-4e27-4de8-ace4-aeb184783f74" (UID: "42280059-4e27-4de8-ace4-aeb184783f74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.150895 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" podStartSLOduration=8.150868897 podStartE2EDuration="8.150868897s" podCreationTimestamp="2026-01-21 17:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:05.117440492 +0000 UTC m=+1286.043571352" watchObservedRunningTime="2026-01-21 17:38:05.150868897 +0000 UTC m=+1286.076999747"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.213686 4823 scope.go:117] "RemoveContainer" containerID="3bf82de43ab7e722fb03c673fca248493493b44727320a86384bc5782d7bf19e"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.214054 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42280059-4e27-4de8-ace4-aeb184783f74-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.261239 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 17:38:05 crc kubenswrapper[4823]: E0121 17:38:05.261791 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-api"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.261808 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-api"
Jan 21 17:38:05 crc kubenswrapper[4823]: E0121 17:38:05.261823 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d118987-76ea-46aa-9989-274e87e36d3a" containerName="cinder-db-sync"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.261830 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d118987-76ea-46aa-9989-274e87e36d3a" containerName="cinder-db-sync"
Jan 21 17:38:05 crc kubenswrapper[4823]: E0121 17:38:05.261883 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-httpd"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.261892 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-httpd"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.262112 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-api"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.263340 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="42280059-4e27-4de8-ace4-aeb184783f74" containerName="neutron-httpd"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.263373 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d118987-76ea-46aa-9989-274e87e36d3a" containerName="cinder-db-sync"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.271370 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.276737 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.277313 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rh7hd"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.277485 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.277603 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.324710 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.385000 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6697a997-d4df-46c4-8520-8d23c6203f87" path="/var/lib/kubelet/pods/6697a997-d4df-46c4-8520-8d23c6203f87/volumes"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.393925 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z654m"]
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.393978 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-z9rh9"]
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.402141 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.421118 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.421184 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brknf\" (UniqueName: \"kubernetes.io/projected/f1353050-fc17-4adc-827e-0eb14c17623d-kube-api-access-brknf\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.421222 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.421274 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.421344 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1353050-fc17-4adc-827e-0eb14c17623d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.421433 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-scripts\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.430804 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-z9rh9"]
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.442304 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54965ff674-mlg85"]
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.454657 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54965ff674-mlg85"]
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.523784 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.523898 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlvqk\" (UniqueName: \"kubernetes.io/projected/be8be361-2dd3-4515-83ee-509176ed3eb9-kube-api-access-rlvqk\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.523939 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.523960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.524028 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-scripts\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.524051 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-config\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.524079 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.524123 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brknf\" (UniqueName: \"kubernetes.io/projected/f1353050-fc17-4adc-827e-0eb14c17623d-kube-api-access-brknf\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.524160 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.524195 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-svc\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.524251 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.524283 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1353050-fc17-4adc-827e-0eb14c17623d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.524383 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1353050-fc17-4adc-827e-0eb14c17623d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.548435 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-scripts\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.553913 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brknf\" (UniqueName: \"kubernetes.io/projected/f1353050-fc17-4adc-827e-0eb14c17623d-kube-api-access-brknf\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.558463 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.558943 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.559052 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.566190 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.568019 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.574154 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.583610 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.629310 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.629371 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlvqk\" (UniqueName: \"kubernetes.io/projected/be8be361-2dd3-4515-83ee-509176ed3eb9-kube-api-access-rlvqk\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.629406 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.629429 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.629475 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-config\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.629552 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-svc\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.630532 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-svc\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.639512 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.640572 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.641740 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-config\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.642621 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.644831 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.668761 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlvqk\" (UniqueName: \"kubernetes.io/projected/be8be361-2dd3-4515-83ee-509176ed3eb9-kube-api-access-rlvqk\") pod \"dnsmasq-dns-6578955fd5-z9rh9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") " pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.728452 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.732237 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.732307 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data-custom\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.732331 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-logs\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.732364 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.732454 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.732486 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-scripts\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.732503 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtkvz\" (UniqueName: \"kubernetes.io/projected/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-kube-api-access-rtkvz\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.803020 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.803716 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.834301 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.834384 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data-custom\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.834408 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-logs\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.834442 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.834532 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.834563 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-scripts\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.834586 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtkvz\" (UniqueName: \"kubernetes.io/projected/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-kube-api-access-rtkvz\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.835746 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-logs\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.836758 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.842399 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.842691 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-scripts\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.848662 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data-custom\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.857789 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:05 crc kubenswrapper[4823]: I0121 17:38:05.863438 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtkvz\" (UniqueName: \"kubernetes.io/projected/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-kube-api-access-rtkvz\") pod \"cinder-api-0\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " pod="openstack/cinder-api-0"
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.054735 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" event={"ID":"c127d68c-e927-419c-a632-2a85db61e595","Type":"ContainerStarted","Data":"2b4524693d4430304a628fdae886c602b3a9bd79e10cb52dc82108d1d3b7dc2a"}
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.058566 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78cfb466c-qccf2" event={"ID":"0509a27d-cceb-45b3-9595-b5e5489a3934","Type":"ContainerStarted","Data":"af11cbd30961d2fa06328c378b6b49bb50bc7b044dd07c44b9b599f917f29853"}
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.062711 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c955fd8fb-p27tx" event={"ID":"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f","Type":"ContainerStarted","Data":"2582ee4c49f23e07ef6d9d8229487309d4b6843e3386562053746310ff02c020"}
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.066092 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerStarted","Data":"65b24ecf8354486e28d27e1731d64ccc31081f63c7acd6151ff72319b2d62bfb"}
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.086524 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-666c84c45d-q2ttq" podStartSLOduration=3.794539324 podStartE2EDuration="9.086482803s" podCreationTimestamp="2026-01-21 17:37:57 +0000 UTC" firstStartedPulling="2026-01-21 17:37:58.801162147 +0000 UTC m=+1279.727293007" lastFinishedPulling="2026-01-21 17:38:04.093105626 +0000 UTC m=+1285.019236486" observedRunningTime="2026-01-21 17:38:06.077299517 +0000 UTC m=+1287.003430377" watchObservedRunningTime="2026-01-21 17:38:06.086482803 +0000 UTC m=+1287.012613663"
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.119592 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-78cfb466c-qccf2" podStartSLOduration=3.760052912 podStartE2EDuration="9.11957024s" podCreationTimestamp="2026-01-21 17:37:57 +0000 UTC" firstStartedPulling="2026-01-21 17:37:58.731451106 +0000 UTC m=+1279.657581956" lastFinishedPulling="2026-01-21 17:38:04.090968424 +0000 UTC m=+1285.017099284" observedRunningTime="2026-01-21 17:38:06.10947378 +0000 UTC m=+1287.035604650" watchObservedRunningTime="2026-01-21 17:38:06.11957024 +0000 UTC m=+1287.045701100"
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.139346 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.733705 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-z9rh9"]
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.782143 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 17:38:06 crc kubenswrapper[4823]: I0121 17:38:06.987787 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 17:38:07 crc kubenswrapper[4823]: I0121 17:38:07.132011 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1353050-fc17-4adc-827e-0eb14c17623d","Type":"ContainerStarted","Data":"9bac02db1b02ef95ffe85f72952528681757f459540c6cee71760231b20f36f7"}
Jan 21 17:38:07 crc kubenswrapper[4823]: I0121 17:38:07.186840 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9" event={"ID":"be8be361-2dd3-4515-83ee-509176ed3eb9","Type":"ContainerStarted","Data":"a90641493e806eed74310c43899621fe3d4251430fe4e313074b759407d3acf3"}
Jan 21 17:38:07 crc kubenswrapper[4823]: I0121 17:38:07.203972 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d35c9d8-f822-4491-a13c-4b2faa8c3d01","Type":"ContainerStarted","Data":"beb964e938a2a61e41a5969feee0ed6a1b8269c3bedadf8fa5ee9a9f6a3798ac"}
Jan 21 17:38:07 crc kubenswrapper[4823]: I0121 17:38:07.204170 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" podUID="04e9985d-1c05-40ec-8cae-ed502baa16f5" containerName="dnsmasq-dns" containerID="cri-o://5d8ad805b33b8d24a9a7998b83529872ac233b4433770d1af169eb557abdc753" gracePeriod=10
Jan 21 17:38:07 crc kubenswrapper[4823]: I0121 17:38:07.378525 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42280059-4e27-4de8-ace4-aeb184783f74" path="/var/lib/kubelet/pods/42280059-4e27-4de8-ace4-aeb184783f74/volumes"
Jan 21 17:38:08 crc kubenswrapper[4823]: I0121 17:38:08.215962 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerStarted","Data":"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa"}
Jan 21 17:38:08 crc kubenswrapper[4823]: I0121 17:38:08.218692 4823 generic.go:334] "Generic (PLEG): container finished" podID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerID="9bdbd8dbcff4be15445e291de88aae0aaa2bf29fecc871dc52d140cbd8e40c78" exitCode=137
Jan 21 17:38:08 crc kubenswrapper[4823]: I0121 17:38:08.218796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64bdb4885b-ddprk" event={"ID":"7890c9eb-67a6-4c41-af5b-c57f0fddc533","Type":"ContainerDied","Data":"9bdbd8dbcff4be15445e291de88aae0aaa2bf29fecc871dc52d140cbd8e40c78"}
Jan 21 17:38:08 crc kubenswrapper[4823]: I0121 17:38:08.221160 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c955fd8fb-p27tx" event={"ID":"f6a34862-ff4b-4bc6-ba5c-803fdaeb722f","Type":"ContainerStarted","Data":"7371d35f48dcdc7e51a01f4e653ac8e42b4bb77ce92de855ac242a49be948aeb"}
Jan 21 17:38:08 crc kubenswrapper[4823]: I0121 17:38:08.481019 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 17:38:09 crc kubenswrapper[4823]: I0121 17:38:09.144049 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-85f89f98cd-9zzvq" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 17:38:09 crc kubenswrapper[4823]: I0121 17:38:09.234283 4823 generic.go:334] "Generic (PLEG): container finished" podID="04e9985d-1c05-40ec-8cae-ed502baa16f5" containerID="5d8ad805b33b8d24a9a7998b83529872ac233b4433770d1af169eb557abdc753" exitCode=0
Jan 21 17:38:09 crc kubenswrapper[4823]: I0121 17:38:09.234383 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" event={"ID":"04e9985d-1c05-40ec-8cae-ed502baa16f5","Type":"ContainerDied","Data":"5d8ad805b33b8d24a9a7998b83529872ac233b4433770d1af169eb557abdc753"}
Jan 21 17:38:09 crc kubenswrapper[4823]: I0121 17:38:09.234912 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c955fd8fb-p27tx"
Jan 21 17:38:09 crc kubenswrapper[4823]: I0121 17:38:09.321471 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c955fd8fb-p27tx" podStartSLOduration=8.321446778 podStartE2EDuration="8.321446778s" podCreationTimestamp="2026-01-21 17:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:09.308303073 +0000 UTC m=+1290.234433933" watchObservedRunningTime="2026-01-21 17:38:09.321446778 +0000 UTC m=+1290.247577638"
Jan 21 17:38:09 crc kubenswrapper[4823]: E0121 17:38:09.709905 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe8be361_2dd3_4515_83ee_509176ed3eb9.slice/crio-b1eeeddf6630256db257081ce06a11f270b03b21d79fbf2ac111acf799ef38b6.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.172831 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64bdb4885b-ddprk"
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.193174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-tls-certs\") pod \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") "
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.193426 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7890c9eb-67a6-4c41-af5b-c57f0fddc533-logs\") pod \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") "
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.193531 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-secret-key\") pod \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") "
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.193660 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-config-data\") pod \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") "
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.193739 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-combined-ca-bundle\") pod \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") "
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.194003 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw759\" (UniqueName: \"kubernetes.io/projected/7890c9eb-67a6-4c41-af5b-c57f0fddc533-kube-api-access-jw759\") pod \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") "
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.194111 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-scripts\") pod \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\" (UID: \"7890c9eb-67a6-4c41-af5b-c57f0fddc533\") "
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.196389 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7890c9eb-67a6-4c41-af5b-c57f0fddc533-logs" (OuterVolumeSpecName: "logs") pod "7890c9eb-67a6-4c41-af5b-c57f0fddc533" (UID: "7890c9eb-67a6-4c41-af5b-c57f0fddc533"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.203047 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7890c9eb-67a6-4c41-af5b-c57f0fddc533" (UID: "7890c9eb-67a6-4c41-af5b-c57f0fddc533"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.214120 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7890c9eb-67a6-4c41-af5b-c57f0fddc533-kube-api-access-jw759" (OuterVolumeSpecName: "kube-api-access-jw759") pod "7890c9eb-67a6-4c41-af5b-c57f0fddc533" (UID: "7890c9eb-67a6-4c41-af5b-c57f0fddc533"). InnerVolumeSpecName "kube-api-access-jw759". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.226140 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-scripts" (OuterVolumeSpecName: "scripts") pod "7890c9eb-67a6-4c41-af5b-c57f0fddc533" (UID: "7890c9eb-67a6-4c41-af5b-c57f0fddc533"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.238984 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-config-data" (OuterVolumeSpecName: "config-data") pod "7890c9eb-67a6-4c41-af5b-c57f0fddc533" (UID: "7890c9eb-67a6-4c41-af5b-c57f0fddc533"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.239055 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7890c9eb-67a6-4c41-af5b-c57f0fddc533" (UID: "7890c9eb-67a6-4c41-af5b-c57f0fddc533"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.260042 4823 generic.go:334] "Generic (PLEG): container finished" podID="be8be361-2dd3-4515-83ee-509176ed3eb9" containerID="b1eeeddf6630256db257081ce06a11f270b03b21d79fbf2ac111acf799ef38b6" exitCode=0
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.260147 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9" event={"ID":"be8be361-2dd3-4515-83ee-509176ed3eb9","Type":"ContainerDied","Data":"b1eeeddf6630256db257081ce06a11f270b03b21d79fbf2ac111acf799ef38b6"}
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.269502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64bdb4885b-ddprk" event={"ID":"7890c9eb-67a6-4c41-af5b-c57f0fddc533","Type":"ContainerDied","Data":"9cc57cb183057b6b3f399509fb7b9eba9b5df1c0157b1c131a8441959ec60e59"}
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.269669 4823 scope.go:117] "RemoveContainer" containerID="4e46d9e6739ac4d11e1c17a1a18eb7088ea089e47013703887983e60a3eea237"
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.270058 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64bdb4885b-ddprk"
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.270633 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c955fd8fb-p27tx"
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.297126 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "7890c9eb-67a6-4c41-af5b-c57f0fddc533" (UID: "7890c9eb-67a6-4c41-af5b-c57f0fddc533"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.297145 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7890c9eb-67a6-4c41-af5b-c57f0fddc533-logs\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.297548 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.297567 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.297585 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.297600 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw759\" (UniqueName: \"kubernetes.io/projected/7890c9eb-67a6-4c41-af5b-c57f0fddc533-kube-api-access-jw759\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.297614 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7890c9eb-67a6-4c41-af5b-c57f0fddc533-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.399526 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7890c9eb-67a6-4c41-af5b-c57f0fddc533-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.607094 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64bdb4885b-ddprk"]
Jan 21 17:38:10 crc kubenswrapper[4823]: I0121 17:38:10.616117 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-64bdb4885b-ddprk"]
Jan 21 17:38:11 crc kubenswrapper[4823]: I0121 17:38:11.147593 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-85f89f98cd-9zzvq" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 17:38:11 crc kubenswrapper[4823]: I0121 17:38:11.354673 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" path="/var/lib/kubelet/pods/7890c9eb-67a6-4c41-af5b-c57f0fddc533/volumes"
Jan 21 17:38:12 crc kubenswrapper[4823]: I0121 17:38:12.088392 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85f89f98cd-9zzvq"
Jan 21 17:38:12 crc kubenswrapper[4823]: I0121 17:38:12.092535 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85f89f98cd-9zzvq"
Jan 21 17:38:12 crc kubenswrapper[4823]: I0121 17:38:12.226072 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c955fd8fb-p27tx"
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.093929 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" podUID="04e9985d-1c05-40ec-8cae-ed502baa16f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused"
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.313659 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d35c9d8-f822-4491-a13c-4b2faa8c3d01","Type":"ContainerStarted","Data":"c89ca59243fdfb627d17178647127f44d37129767c8cc6f577297bc0e41744aa"}
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.321117 4823 scope.go:117] "RemoveContainer" containerID="9bdbd8dbcff4be15445e291de88aae0aaa2bf29fecc871dc52d140cbd8e40c78"
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.737008 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z654m"
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.888620 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-svc\") pod \"04e9985d-1c05-40ec-8cae-ed502baa16f5\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") "
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.888682 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-config\") pod \"04e9985d-1c05-40ec-8cae-ed502baa16f5\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") "
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.888717 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6npg\" (UniqueName: \"kubernetes.io/projected/04e9985d-1c05-40ec-8cae-ed502baa16f5-kube-api-access-h6npg\") pod \"04e9985d-1c05-40ec-8cae-ed502baa16f5\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") "
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.888770 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-nb\") pod \"04e9985d-1c05-40ec-8cae-ed502baa16f5\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") "
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.889023 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-swift-storage-0\") pod \"04e9985d-1c05-40ec-8cae-ed502baa16f5\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") "
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.889056 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-sb\") pod \"04e9985d-1c05-40ec-8cae-ed502baa16f5\" (UID: \"04e9985d-1c05-40ec-8cae-ed502baa16f5\") "
Jan 21 17:38:13 crc kubenswrapper[4823]: I0121 17:38:13.921082 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e9985d-1c05-40ec-8cae-ed502baa16f5-kube-api-access-h6npg" (OuterVolumeSpecName: "kube-api-access-h6npg") pod "04e9985d-1c05-40ec-8cae-ed502baa16f5" (UID: "04e9985d-1c05-40ec-8cae-ed502baa16f5"). InnerVolumeSpecName "kube-api-access-h6npg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.001980 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6npg\" (UniqueName: \"kubernetes.io/projected/04e9985d-1c05-40ec-8cae-ed502baa16f5-kube-api-access-h6npg\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.029674 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-config" (OuterVolumeSpecName: "config") pod "04e9985d-1c05-40ec-8cae-ed502baa16f5" (UID: "04e9985d-1c05-40ec-8cae-ed502baa16f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.042482 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "04e9985d-1c05-40ec-8cae-ed502baa16f5" (UID: "04e9985d-1c05-40ec-8cae-ed502baa16f5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.073505 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "04e9985d-1c05-40ec-8cae-ed502baa16f5" (UID: "04e9985d-1c05-40ec-8cae-ed502baa16f5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.073583 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "04e9985d-1c05-40ec-8cae-ed502baa16f5" (UID: "04e9985d-1c05-40ec-8cae-ed502baa16f5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.095501 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "04e9985d-1c05-40ec-8cae-ed502baa16f5" (UID: "04e9985d-1c05-40ec-8cae-ed502baa16f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.118314 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.118498 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-config\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.118554 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.118627 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.118688 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04e9985d-1c05-40ec-8cae-ed502baa16f5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.178629 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c955fd8fb-p27tx"
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.266972 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85f89f98cd-9zzvq"]
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.267346 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85f89f98cd-9zzvq" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api-log" containerID="cri-o://ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b" gracePeriod=30
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.267867 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85f89f98cd-9zzvq" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api" containerID="cri-o://f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d" gracePeriod=30
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.380241 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85f89f98cd-9zzvq" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": EOF"
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.448017 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9" event={"ID":"be8be361-2dd3-4515-83ee-509176ed3eb9","Type":"ContainerStarted","Data":"93363557d18397df2c98de6c06c1b60e298c77c92e8bb189d84b97673ce87e58"}
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.448441 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.468763 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z654m" event={"ID":"04e9985d-1c05-40ec-8cae-ed502baa16f5","Type":"ContainerDied","Data":"a72d6c9d4f2ed157b89279a5ff010728a72312debc5789b75635d8ed089c613f"}
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.468825 4823 scope.go:117] "RemoveContainer" containerID="5d8ad805b33b8d24a9a7998b83529872ac233b4433770d1af169eb557abdc753"
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.468961 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z654m"
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.492116 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9" podStartSLOduration=9.492094574 podStartE2EDuration="9.492094574s" podCreationTimestamp="2026-01-21 17:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:14.475209448 +0000 UTC m=+1295.401340308" watchObservedRunningTime="2026-01-21 17:38:14.492094574 +0000 UTC m=+1295.418225434"
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.552940 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z654m"]
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.566098 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z654m"]
Jan 21 17:38:14 crc kubenswrapper[4823]: I0121 17:38:14.572638 4823 scope.go:117] "RemoveContainer" containerID="53d506d064d6c42a2894d2ec6f823a2fc89c43388ae7558ace314ab65511d212"
Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.071614 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.072280 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.381910 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e9985d-1c05-40ec-8cae-ed502baa16f5" path="/var/lib/kubelet/pods/04e9985d-1c05-40ec-8cae-ed502baa16f5/volumes"
Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.557540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerStarted","Data":"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f"}
Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.557962 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerStarted","Data":"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49"}
Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.561280 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1353050-fc17-4adc-827e-0eb14c17623d","Type":"ContainerStarted","Data":"882d101df45f194da70c92d46a175496a3dd15b8cd33a4ae44f0054ed2f5b48e"}
Jan 21 17:38:15 crc kubenswrapper[4823]: I0121
17:38:15.564382 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"059733a2-b933-471e-b40b-3618874187a0","Type":"ContainerStarted","Data":"2ddf0eca966ca18e3393020c93a4b161c2c865dcc5888e2d4f0e8b96af36e2d1"} Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.568201 4823 generic.go:334] "Generic (PLEG): container finished" podID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerID="ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b" exitCode=143 Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.568268 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85f89f98cd-9zzvq" event={"ID":"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85","Type":"ContainerDied","Data":"ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b"} Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.571542 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" containerName="cinder-api-log" containerID="cri-o://c89ca59243fdfb627d17178647127f44d37129767c8cc6f577297bc0e41744aa" gracePeriod=30 Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.571767 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d35c9d8-f822-4491-a13c-4b2faa8c3d01","Type":"ContainerStarted","Data":"2b3af23d6abbde90f3fa65bd663e913d5eadd6edd2da71b26b143190f993db32"} Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.571803 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.571829 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" containerName="cinder-api" containerID="cri-o://2b3af23d6abbde90f3fa65bd663e913d5eadd6edd2da71b26b143190f993db32" gracePeriod=30 Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.584766 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.585591152 podStartE2EDuration="40.584737516s" podCreationTimestamp="2026-01-21 17:37:35 +0000 UTC" firstStartedPulling="2026-01-21 17:37:36.756823136 +0000 UTC m=+1257.682953996" lastFinishedPulling="2026-01-21 17:38:13.75596951 +0000 UTC m=+1294.682100360" observedRunningTime="2026-01-21 17:38:15.583357612 +0000 UTC m=+1296.509488472" watchObservedRunningTime="2026-01-21 17:38:15.584737516 +0000 UTC m=+1296.510868366" Jan 21 17:38:15 crc kubenswrapper[4823]: I0121 17:38:15.612123 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.612102261 podStartE2EDuration="10.612102261s" podCreationTimestamp="2026-01-21 17:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:15.611464615 +0000 UTC m=+1296.537595475" watchObservedRunningTime="2026-01-21 17:38:15.612102261 +0000 UTC m=+1296.538233121" Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.597634 4823 generic.go:334] "Generic (PLEG): container finished" podID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" containerID="2b3af23d6abbde90f3fa65bd663e913d5eadd6edd2da71b26b143190f993db32" exitCode=0 Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.598203 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" containerID="c89ca59243fdfb627d17178647127f44d37129767c8cc6f577297bc0e41744aa" exitCode=143 Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.597732 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d35c9d8-f822-4491-a13c-4b2faa8c3d01","Type":"ContainerDied","Data":"2b3af23d6abbde90f3fa65bd663e913d5eadd6edd2da71b26b143190f993db32"} Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.598340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d35c9d8-f822-4491-a13c-4b2faa8c3d01","Type":"ContainerDied","Data":"c89ca59243fdfb627d17178647127f44d37129767c8cc6f577297bc0e41744aa"} Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.598424 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d35c9d8-f822-4491-a13c-4b2faa8c3d01","Type":"ContainerDied","Data":"beb964e938a2a61e41a5969feee0ed6a1b8269c3bedadf8fa5ee9a9f6a3798ac"} Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.598441 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb964e938a2a61e41a5969feee0ed6a1b8269c3bedadf8fa5ee9a9f6a3798ac" Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.602132 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1353050-fc17-4adc-827e-0eb14c17623d","Type":"ContainerStarted","Data":"85091161a8f4867eb1b3bfcda5a21c3f4e6ca2039f97ca412b7d74b0c73a12f6"} Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.652594 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.747088371 podStartE2EDuration="11.652567675s" podCreationTimestamp="2026-01-21 17:38:05 +0000 UTC" firstStartedPulling="2026-01-21 17:38:06.853178042 +0000 UTC m=+1287.779308902" lastFinishedPulling="2026-01-21 17:38:13.758657346 +0000 UTC m=+1294.684788206" observedRunningTime="2026-01-21 17:38:16.640299273 +0000 UTC m=+1297.566430133" watchObservedRunningTime="2026-01-21 17:38:16.652567675 +0000 UTC m=+1297.578698535" Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.760088 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.933670 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data\") pod \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.933717 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-logs\") pod \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.933755 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-combined-ca-bundle\") pod \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.933786 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-etc-machine-id\") pod \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.933880 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data-custom\") pod \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.933900 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-scripts\") pod \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.933951 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtkvz\" (UniqueName: \"kubernetes.io/projected/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-kube-api-access-rtkvz\") pod \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\" (UID: \"2d35c9d8-f822-4491-a13c-4b2faa8c3d01\") " Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.934917 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2d35c9d8-f822-4491-a13c-4b2faa8c3d01" (UID: "2d35c9d8-f822-4491-a13c-4b2faa8c3d01"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.935165 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-logs" (OuterVolumeSpecName: "logs") pod "2d35c9d8-f822-4491-a13c-4b2faa8c3d01" (UID: "2d35c9d8-f822-4491-a13c-4b2faa8c3d01"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.943940 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2d35c9d8-f822-4491-a13c-4b2faa8c3d01" (UID: "2d35c9d8-f822-4491-a13c-4b2faa8c3d01"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.961050 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-kube-api-access-rtkvz" (OuterVolumeSpecName: "kube-api-access-rtkvz") pod "2d35c9d8-f822-4491-a13c-4b2faa8c3d01" (UID: "2d35c9d8-f822-4491-a13c-4b2faa8c3d01"). InnerVolumeSpecName "kube-api-access-rtkvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.965212 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-scripts" (OuterVolumeSpecName: "scripts") pod "2d35c9d8-f822-4491-a13c-4b2faa8c3d01" (UID: "2d35c9d8-f822-4491-a13c-4b2faa8c3d01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:16 crc kubenswrapper[4823]: I0121 17:38:16.987204 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d35c9d8-f822-4491-a13c-4b2faa8c3d01" (UID: "2d35c9d8-f822-4491-a13c-4b2faa8c3d01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.024506 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data" (OuterVolumeSpecName: "config-data") pod "2d35c9d8-f822-4491-a13c-4b2faa8c3d01" (UID: "2d35c9d8-f822-4491-a13c-4b2faa8c3d01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.035453 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.035492 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.035502 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtkvz\" (UniqueName: \"kubernetes.io/projected/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-kube-api-access-rtkvz\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.035512 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.035521 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.035531 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.035540 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d35c9d8-f822-4491-a13c-4b2faa8c3d01-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.613118 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerStarted","Data":"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337"} Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.613177 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.613797 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.643617 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.094705546 podStartE2EDuration="14.643597619s" podCreationTimestamp="2026-01-21 17:38:03 +0000 UTC" firstStartedPulling="2026-01-21 17:38:05.007006097 +0000 UTC m=+1285.933136947" lastFinishedPulling="2026-01-21 17:38:16.55589817 +0000 UTC m=+1297.482029020" observedRunningTime="2026-01-21 17:38:17.638487252 +0000 UTC m=+1298.564618122" watchObservedRunningTime="2026-01-21 17:38:17.643597619 +0000 UTC m=+1298.569728479" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.673799 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.683335 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.700912 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 17:38:17 crc kubenswrapper[4823]: E0121 17:38:17.701495 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon-log" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701523 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon-log" Jan 21 17:38:17 crc kubenswrapper[4823]: E0121 17:38:17.701551 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e9985d-1c05-40ec-8cae-ed502baa16f5" containerName="dnsmasq-dns" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701561 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e9985d-1c05-40ec-8cae-ed502baa16f5" containerName="dnsmasq-dns" Jan 21 17:38:17 crc kubenswrapper[4823]: E0121 17:38:17.701580 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e9985d-1c05-40ec-8cae-ed502baa16f5" containerName="init" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701589 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e9985d-1c05-40ec-8cae-ed502baa16f5" containerName="init" Jan 21 17:38:17 crc kubenswrapper[4823]: E0121 17:38:17.701605 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701612 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon" Jan 21 17:38:17 crc kubenswrapper[4823]: E0121 17:38:17.701639 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" containerName="cinder-api-log" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701647 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" containerName="cinder-api-log" Jan 21 17:38:17 crc kubenswrapper[4823]: E0121 17:38:17.701673 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" containerName="cinder-api" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701683 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" 
containerName="cinder-api" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701928 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" containerName="cinder-api" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701949 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701966 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e9985d-1c05-40ec-8cae-ed502baa16f5" containerName="dnsmasq-dns" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701981 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" containerName="cinder-api-log" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.701992 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7890c9eb-67a6-4c41-af5b-c57f0fddc533" containerName="horizon-log" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.703422 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.705523 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.705899 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.706130 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.709225 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.770686 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.770759 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-scripts\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.770813 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-logs\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.770839 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-config-data-custom\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.770882 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-config-data\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.770907 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw8cr\" (UniqueName: \"kubernetes.io/projected/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-kube-api-access-gw8cr\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.770939 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.771037 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.771085 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.872816 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-scripts\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874074 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-logs\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874193 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-config-data-custom\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874285 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-config-data\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874364 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw8cr\" (UniqueName: \"kubernetes.io/projected/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-kube-api-access-gw8cr\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874449 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874553 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874794 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874686 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.874646 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-logs\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.879179 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.879341 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-scripts\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.879633 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.881974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-config-data\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.886551 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.894879 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-config-data-custom\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:17 crc kubenswrapper[4823]: I0121 17:38:17.896747 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw8cr\" (UniqueName: \"kubernetes.io/projected/ded7b85d-f0d3-4e9b-b121-cadd9b8488b6-kube-api-access-gw8cr\") pod \"cinder-api-0\" (UID: \"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6\") " pod="openstack/cinder-api-0" Jan 21 17:38:18 crc kubenswrapper[4823]: I0121 17:38:18.093079 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 17:38:18 crc kubenswrapper[4823]: I0121 17:38:18.636418 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 17:38:18 crc kubenswrapper[4823]: W0121 17:38:18.642567 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podded7b85d_f0d3_4e9b_b121_cadd9b8488b6.slice/crio-9518f99a08f3a029152c5617f2ff22f7d798155340f931d073eb5448803ccad0 WatchSource:0}: Error finding container 9518f99a08f3a029152c5617f2ff22f7d798155340f931d073eb5448803ccad0: Status 404 returned error can't find the container with id 9518f99a08f3a029152c5617f2ff22f7d798155340f931d073eb5448803ccad0 Jan 21 17:38:18 crc kubenswrapper[4823]: I0121 17:38:18.845568 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85f89f98cd-9zzvq" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": read tcp 10.217.0.2:57776->10.217.0.180:9311: read: connection reset by peer" Jan 21 17:38:18 crc kubenswrapper[4823]: I0121 17:38:18.845580 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85f89f98cd-9zzvq" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": read tcp 10.217.0.2:57780->10.217.0.180:9311: read: connection reset by peer" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.416432 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d35c9d8-f822-4491-a13c-4b2faa8c3d01" path="/var/lib/kubelet/pods/2d35c9d8-f822-4491-a13c-4b2faa8c3d01/volumes" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.429576 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.429831 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerName="glance-log" containerID="cri-o://7b680b277188a67080df784b710b84edfecab7659e5fd84706112f335574955f" gracePeriod=30 Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.430007 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerName="glance-httpd" containerID="cri-o://60f007dd250764f75914cb2570b765d2f1f81f90b83ba35bdbec8ec14db35769" gracePeriod=30 Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.534387 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.634696 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6","Type":"ContainerStarted","Data":"9025c6b91b34cd54306eb956ccc0999f986ead5b6e70ced950ea221c0161989b"} Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.634780 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6","Type":"ContainerStarted","Data":"9518f99a08f3a029152c5617f2ff22f7d798155340f931d073eb5448803ccad0"} Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.636746 4823 generic.go:334] "Generic (PLEG): container finished" podID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerID="f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d" exitCode=0 Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.636816 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85f89f98cd-9zzvq" event={"ID":"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85","Type":"ContainerDied","Data":"f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d"} Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.636871 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85f89f98cd-9zzvq" event={"ID":"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85","Type":"ContainerDied","Data":"8b925c49c78e9a5d3857a4bdc6ad11834c74d5745a8634083ff612c13dd43e4b"} Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.636891 4823 scope.go:117] "RemoveContainer" containerID="f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.637036 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85f89f98cd-9zzvq" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.640122 4823 generic.go:334] "Generic (PLEG): container finished" podID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerID="7b680b277188a67080df784b710b84edfecab7659e5fd84706112f335574955f" exitCode=143 Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.640190 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9f3f6704-aa00-4387-9410-564e0cf95d93","Type":"ContainerDied","Data":"7b680b277188a67080df784b710b84edfecab7659e5fd84706112f335574955f"} Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.713641 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r55j9\" (UniqueName: \"kubernetes.io/projected/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-kube-api-access-r55j9\") pod \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.713705 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-logs\") pod \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.713802 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data-custom\") pod \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.713841 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-combined-ca-bundle\") pod \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.713906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data\") pod \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\" (UID: \"84dc38f6-1bcf-46e3-b82b-aa5f056c1b85\") " Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.714435 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-logs" (OuterVolumeSpecName: "logs") pod "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" (UID: "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.718598 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" (UID: "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.722200 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-kube-api-access-r55j9" (OuterVolumeSpecName: "kube-api-access-r55j9") pod "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" (UID: "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85"). InnerVolumeSpecName "kube-api-access-r55j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.778088 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" (UID: "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.808230 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data" (OuterVolumeSpecName: "config-data") pod "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" (UID: "84dc38f6-1bcf-46e3-b82b-aa5f056c1b85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.815781 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.815831 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.815845 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.816383 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r55j9\" (UniqueName: \"kubernetes.io/projected/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-kube-api-access-r55j9\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.816402 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.934307 4823 scope.go:117] "RemoveContainer" containerID="ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.968520 4823 scope.go:117] "RemoveContainer" containerID="f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d" Jan 21 17:38:19 crc kubenswrapper[4823]: E0121 17:38:19.969155 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d\": container with ID starting with f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d not found: ID does not exist" containerID="f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d" Jan 
21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.969187 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d"} err="failed to get container status \"f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d\": rpc error: code = NotFound desc = could not find container \"f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d\": container with ID starting with f90499ba8523f9824d5f42d0751460543e9e2ab601f758d32ba8669ff05f009d not found: ID does not exist" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.969209 4823 scope.go:117] "RemoveContainer" containerID="ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b" Jan 21 17:38:19 crc kubenswrapper[4823]: E0121 17:38:19.969642 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b\": container with ID starting with ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b not found: ID does not exist" containerID="ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.969664 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b"} err="failed to get container status \"ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b\": rpc error: code = NotFound desc = could not find container \"ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b\": container with ID starting with ec9ffc30e2144b4f02c0a0f389774810d6db700890a07dac35e09a7c714a807b not found: ID does not exist" Jan 21 17:38:19 crc kubenswrapper[4823]: I0121 17:38:19.990610 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85f89f98cd-9zzvq"] Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.013761 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-85f89f98cd-9zzvq"] Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.493210 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vsz94"] Jan 21 17:38:20 crc kubenswrapper[4823]: E0121 17:38:20.494159 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api-log" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.494181 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api-log" Jan 21 17:38:20 crc kubenswrapper[4823]: E0121 17:38:20.494208 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.494217 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.494412 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.494444 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" containerName="barbican-api-log" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 
17:38:20.495148 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.507315 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vsz94"] Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.535096 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373810fa-4b13-4036-99a4-3b1f4d02c0cf-operator-scripts\") pod \"nova-api-db-create-vsz94\" (UID: \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\") " pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.535148 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdcmh\" (UniqueName: \"kubernetes.io/projected/373810fa-4b13-4036-99a4-3b1f4d02c0cf-kube-api-access-gdcmh\") pod \"nova-api-db-create-vsz94\" (UID: \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\") " pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.579300 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kbsvz"] Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.580965 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.587835 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kbsvz"] Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.623581 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2a23-account-create-update-vbrvj"] Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.625147 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.629619 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.638330 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqmxn\" (UniqueName: \"kubernetes.io/projected/45f9b916-5570-4624-822e-587591152bfe-kube-api-access-hqmxn\") pod \"nova-cell0-db-create-kbsvz\" (UID: \"45f9b916-5570-4624-822e-587591152bfe\") " pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.638443 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cbj2\" (UniqueName: \"kubernetes.io/projected/ad35032d-b4a7-40e6-b249-ed6fda0b6917-kube-api-access-8cbj2\") pod \"nova-api-2a23-account-create-update-vbrvj\" (UID: \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\") " pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.638515 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35032d-b4a7-40e6-b249-ed6fda0b6917-operator-scripts\") pod \"nova-api-2a23-account-create-update-vbrvj\" (UID: \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\") " pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.638553 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f9b916-5570-4624-822e-587591152bfe-operator-scripts\") pod \"nova-cell0-db-create-kbsvz\" (UID: \"45f9b916-5570-4624-822e-587591152bfe\") " pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.638608 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373810fa-4b13-4036-99a4-3b1f4d02c0cf-operator-scripts\") pod \"nova-api-db-create-vsz94\" (UID: \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\") " pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.638629 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdcmh\" (UniqueName: \"kubernetes.io/projected/373810fa-4b13-4036-99a4-3b1f4d02c0cf-kube-api-access-gdcmh\") pod \"nova-api-db-create-vsz94\" (UID: \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\") " pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.641836 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2a23-account-create-update-vbrvj"] Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.643419 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373810fa-4b13-4036-99a4-3b1f4d02c0cf-operator-scripts\") pod \"nova-api-db-create-vsz94\" (UID: \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\") " pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.644425 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.670244 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdcmh\" (UniqueName: \"kubernetes.io/projected/373810fa-4b13-4036-99a4-3b1f4d02c0cf-kube-api-access-gdcmh\") pod \"nova-api-db-create-vsz94\" (UID: \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\") " pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.679333 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ded7b85d-f0d3-4e9b-b121-cadd9b8488b6","Type":"ContainerStarted","Data":"80af70b982037c7f3b58b5e599e5ec7ec98aa3b97b60475512b8d0a659363ea0"} Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.679488 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.730678 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vxptb"] Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.732518 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.733033 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.778214 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.778190936 podStartE2EDuration="3.778190936s" podCreationTimestamp="2026-01-21 17:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:20.7192075 +0000 UTC m=+1301.645338360" watchObservedRunningTime="2026-01-21 17:38:20.778190936 +0000 UTC m=+1301.704321796" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.781799 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8bzr\" (UniqueName: \"kubernetes.io/projected/d5e444ec-f927-4a23-8437-3e3b06ab3498-kube-api-access-z8bzr\") pod \"nova-cell1-db-create-vxptb\" (UID: \"d5e444ec-f927-4a23-8437-3e3b06ab3498\") " pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.782232 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmxn\" (UniqueName: \"kubernetes.io/projected/45f9b916-5570-4624-822e-587591152bfe-kube-api-access-hqmxn\") pod \"nova-cell0-db-create-kbsvz\" (UID: \"45f9b916-5570-4624-822e-587591152bfe\") " pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.782427 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cbj2\" (UniqueName: \"kubernetes.io/projected/ad35032d-b4a7-40e6-b249-ed6fda0b6917-kube-api-access-8cbj2\") pod \"nova-api-2a23-account-create-update-vbrvj\" (UID: \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\") " pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.782583 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35032d-b4a7-40e6-b249-ed6fda0b6917-operator-scripts\") pod \"nova-api-2a23-account-create-update-vbrvj\" (UID: \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\") " pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:20 crc 
kubenswrapper[4823]: I0121 17:38:20.782666 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e444ec-f927-4a23-8437-3e3b06ab3498-operator-scripts\") pod \"nova-cell1-db-create-vxptb\" (UID: \"d5e444ec-f927-4a23-8437-3e3b06ab3498\") " pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.782714 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f9b916-5570-4624-822e-587591152bfe-operator-scripts\") pod \"nova-cell0-db-create-kbsvz\" (UID: \"45f9b916-5570-4624-822e-587591152bfe\") " pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.783568 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f9b916-5570-4624-822e-587591152bfe-operator-scripts\") pod \"nova-cell0-db-create-kbsvz\" (UID: \"45f9b916-5570-4624-822e-587591152bfe\") " pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.793757 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35032d-b4a7-40e6-b249-ed6fda0b6917-operator-scripts\") pod \"nova-api-2a23-account-create-update-vbrvj\" (UID: \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\") " pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.813701 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmxn\" (UniqueName: \"kubernetes.io/projected/45f9b916-5570-4624-822e-587591152bfe-kube-api-access-hqmxn\") pod \"nova-cell0-db-create-kbsvz\" (UID: \"45f9b916-5570-4624-822e-587591152bfe\") " pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.819480 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cbj2\" (UniqueName: \"kubernetes.io/projected/ad35032d-b4a7-40e6-b249-ed6fda0b6917-kube-api-access-8cbj2\") pod \"nova-api-2a23-account-create-update-vbrvj\" (UID: \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\") " pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.822598 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.899719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e444ec-f927-4a23-8437-3e3b06ab3498-operator-scripts\") pod \"nova-cell1-db-create-vxptb\" (UID: \"d5e444ec-f927-4a23-8437-3e3b06ab3498\") " pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.900113 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8bzr\" (UniqueName: \"kubernetes.io/projected/d5e444ec-f927-4a23-8437-3e3b06ab3498-kube-api-access-z8bzr\") pod \"nova-cell1-db-create-vxptb\" (UID: \"d5e444ec-f927-4a23-8437-3e3b06ab3498\") " pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.903082 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e444ec-f927-4a23-8437-3e3b06ab3498-operator-scripts\") pod \"nova-cell1-db-create-vxptb\" (UID: \"d5e444ec-f927-4a23-8437-3e3b06ab3498\") " pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.903570 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.926439 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8bzr\" (UniqueName: \"kubernetes.io/projected/d5e444ec-f927-4a23-8437-3e3b06ab3498-kube-api-access-z8bzr\") pod \"nova-cell1-db-create-vxptb\" (UID: \"d5e444ec-f927-4a23-8437-3e3b06ab3498\") " pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.956789 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vxptb"] Jan 21 17:38:20 crc kubenswrapper[4823]: I0121 17:38:20.969320 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.024187 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3c8d-account-create-update-2cn9n"] Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.025956 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.034281 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.096489 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r9rjd"] Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.096771 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" podUID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" containerName="dnsmasq-dns" containerID="cri-o://c69e30eb1de8ad194b9af2d7b29831da6ac5c222aa3bf25d58243d2015cd14d0" gracePeriod=10 Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.109176 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.113124 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb1376bd-dda3-4f8a-93df-1c699366f12e-operator-scripts\") pod \"nova-cell0-3c8d-account-create-update-2cn9n\" (UID: \"fb1376bd-dda3-4f8a-93df-1c699366f12e\") " pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.113205 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k42fn\" (UniqueName: \"kubernetes.io/projected/fb1376bd-dda3-4f8a-93df-1c699366f12e-kube-api-access-k42fn\") pod \"nova-cell0-3c8d-account-create-update-2cn9n\" (UID: \"fb1376bd-dda3-4f8a-93df-1c699366f12e\") " pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.123662 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3c8d-account-create-update-2cn9n"] Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.156139 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d17a-account-create-update-pvshz"] Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.158373 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.170887 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d17a-account-create-update-pvshz"] Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.175770 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.222280 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb1376bd-dda3-4f8a-93df-1c699366f12e-operator-scripts\") pod \"nova-cell0-3c8d-account-create-update-2cn9n\" (UID: \"fb1376bd-dda3-4f8a-93df-1c699366f12e\") " pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.222373 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k42fn\" (UniqueName: \"kubernetes.io/projected/fb1376bd-dda3-4f8a-93df-1c699366f12e-kube-api-access-k42fn\") pod \"nova-cell0-3c8d-account-create-update-2cn9n\" (UID: \"fb1376bd-dda3-4f8a-93df-1c699366f12e\") " pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.222471 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz57h\" (UniqueName: \"kubernetes.io/projected/72025c1d-b829-47a6-90c0-9be0c98110cb-kube-api-access-zz57h\") pod \"nova-cell1-d17a-account-create-update-pvshz\" (UID: \"72025c1d-b829-47a6-90c0-9be0c98110cb\") " pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.223661 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb1376bd-dda3-4f8a-93df-1c699366f12e-operator-scripts\") pod \"nova-cell0-3c8d-account-create-update-2cn9n\" (UID: \"fb1376bd-dda3-4f8a-93df-1c699366f12e\") " 
pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.231701 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72025c1d-b829-47a6-90c0-9be0c98110cb-operator-scripts\") pod \"nova-cell1-d17a-account-create-update-pvshz\" (UID: \"72025c1d-b829-47a6-90c0-9be0c98110cb\") " pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.246782 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k42fn\" (UniqueName: \"kubernetes.io/projected/fb1376bd-dda3-4f8a-93df-1c699366f12e-kube-api-access-k42fn\") pod \"nova-cell0-3c8d-account-create-update-2cn9n\" (UID: \"fb1376bd-dda3-4f8a-93df-1c699366f12e\") " pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.333706 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz57h\" (UniqueName: \"kubernetes.io/projected/72025c1d-b829-47a6-90c0-9be0c98110cb-kube-api-access-zz57h\") pod \"nova-cell1-d17a-account-create-update-pvshz\" (UID: \"72025c1d-b829-47a6-90c0-9be0c98110cb\") " pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.333791 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72025c1d-b829-47a6-90c0-9be0c98110cb-operator-scripts\") pod \"nova-cell1-d17a-account-create-update-pvshz\" (UID: \"72025c1d-b829-47a6-90c0-9be0c98110cb\") " pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.334619 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72025c1d-b829-47a6-90c0-9be0c98110cb-operator-scripts\") pod \"nova-cell1-d17a-account-create-update-pvshz\" (UID: \"72025c1d-b829-47a6-90c0-9be0c98110cb\") " pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.359142 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz57h\" (UniqueName: \"kubernetes.io/projected/72025c1d-b829-47a6-90c0-9be0c98110cb-kube-api-access-zz57h\") pod \"nova-cell1-d17a-account-create-update-pvshz\" (UID: \"72025c1d-b829-47a6-90c0-9be0c98110cb\") " pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.364704 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.400969 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84dc38f6-1bcf-46e3-b82b-aa5f056c1b85" path="/var/lib/kubelet/pods/84dc38f6-1bcf-46e3-b82b-aa5f056c1b85/volumes" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.410357 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.521201 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.532762 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vsz94"] Jan 21 17:38:21 crc kubenswrapper[4823]: W0121 17:38:21.585757 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod373810fa_4b13_4036_99a4_3b1f4d02c0cf.slice/crio-556959d3cd004ca6a137ea0977f1ae19772fa46df17e70bd1b36846bb481c369 WatchSource:0}: Error finding container 556959d3cd004ca6a137ea0977f1ae19772fa46df17e70bd1b36846bb481c369: Status 404 returned error can't find the container with id 556959d3cd004ca6a137ea0977f1ae19772fa46df17e70bd1b36846bb481c369 Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.827980 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kbsvz"] Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.877196 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vsz94" event={"ID":"373810fa-4b13-4036-99a4-3b1f4d02c0cf","Type":"ContainerStarted","Data":"556959d3cd004ca6a137ea0977f1ae19772fa46df17e70bd1b36846bb481c369"} Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.929796 4823 generic.go:334] "Generic (PLEG): container finished" podID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" containerID="c69e30eb1de8ad194b9af2d7b29831da6ac5c222aa3bf25d58243d2015cd14d0" exitCode=0 Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.929925 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" event={"ID":"92e4d2b6-046c-42ff-afcc-2dda2abe61cc","Type":"ContainerDied","Data":"c69e30eb1de8ad194b9af2d7b29831da6ac5c222aa3bf25d58243d2015cd14d0"} Jan 21 17:38:21 crc kubenswrapper[4823]: I0121 17:38:21.931299 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2a23-account-create-update-vbrvj"] Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.096262 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 17:38:22 crc kubenswrapper[4823]: W0121 17:38:22.122888 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e444ec_f927_4a23_8437_3e3b06ab3498.slice/crio-6e1471bdf2cfa116870ca1d4c667c111097abde5eab8b388103b2fbb41efa81b WatchSource:0}: Error finding container 6e1471bdf2cfa116870ca1d4c667c111097abde5eab8b388103b2fbb41efa81b: Status 404 returned error can't find the container with id 6e1471bdf2cfa116870ca1d4c667c111097abde5eab8b388103b2fbb41efa81b Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.141924 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3c8d-account-create-update-2cn9n"] Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.162453 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vxptb"] Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.229792 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.294263 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-svc\") pod \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.294724 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-nb\") pod \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.294765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-config\") pod \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.294925 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-swift-storage-0\") pod \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.295024 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpfkg\" (UniqueName: \"kubernetes.io/projected/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-kube-api-access-kpfkg\") pod \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.295052 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-sb\") pod \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\" (UID: \"92e4d2b6-046c-42ff-afcc-2dda2abe61cc\") " Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.331669 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-kube-api-access-kpfkg" (OuterVolumeSpecName: "kube-api-access-kpfkg") pod "92e4d2b6-046c-42ff-afcc-2dda2abe61cc" (UID: "92e4d2b6-046c-42ff-afcc-2dda2abe61cc"). InnerVolumeSpecName "kube-api-access-kpfkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.403094 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpfkg\" (UniqueName: \"kubernetes.io/projected/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-kube-api-access-kpfkg\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.575953 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d17a-account-create-update-pvshz"] Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.704678 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92e4d2b6-046c-42ff-afcc-2dda2abe61cc" (UID: "92e4d2b6-046c-42ff-afcc-2dda2abe61cc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.710871 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.751955 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92e4d2b6-046c-42ff-afcc-2dda2abe61cc" (UID: "92e4d2b6-046c-42ff-afcc-2dda2abe61cc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.761186 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92e4d2b6-046c-42ff-afcc-2dda2abe61cc" (UID: "92e4d2b6-046c-42ff-afcc-2dda2abe61cc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.766714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "92e4d2b6-046c-42ff-afcc-2dda2abe61cc" (UID: "92e4d2b6-046c-42ff-afcc-2dda2abe61cc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.805993 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-config" (OuterVolumeSpecName: "config") pod "92e4d2b6-046c-42ff-afcc-2dda2abe61cc" (UID: "92e4d2b6-046c-42ff-afcc-2dda2abe61cc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.812961 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.812996 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.813005 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.813014 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92e4d2b6-046c-42ff-afcc-2dda2abe61cc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.814329 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.814786 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="ceilometer-central-agent" containerID="cri-o://5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa" gracePeriod=30 Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.815149 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="sg-core" containerID="cri-o://6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49" gracePeriod=30 Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.815269 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="ceilometer-notification-agent" containerID="cri-o://94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f" gracePeriod=30 Jan 21 17:38:22 crc kubenswrapper[4823]: I0121 17:38:22.815368 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="proxy-httpd" containerID="cri-o://627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337" gracePeriod=30 Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.006322 4823 generic.go:334] "Generic (PLEG): container finished" podID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerID="60f007dd250764f75914cb2570b765d2f1f81f90b83ba35bdbec8ec14db35769" exitCode=0 Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.006440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9f3f6704-aa00-4387-9410-564e0cf95d93","Type":"ContainerDied","Data":"60f007dd250764f75914cb2570b765d2f1f81f90b83ba35bdbec8ec14db35769"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.010813 4823 generic.go:334] "Generic (PLEG): container finished" podID="373810fa-4b13-4036-99a4-3b1f4d02c0cf" containerID="4390191b774bfd675bc80dd6323a80f156ddd947e8ae2e7bead435e1381c490b" exitCode=0 Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.010923 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vsz94" event={"ID":"373810fa-4b13-4036-99a4-3b1f4d02c0cf","Type":"ContainerDied","Data":"4390191b774bfd675bc80dd6323a80f156ddd947e8ae2e7bead435e1381c490b"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.018098 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" event={"ID":"fb1376bd-dda3-4f8a-93df-1c699366f12e","Type":"ContainerStarted","Data":"b2512209aae2be45fddeca499b2a6cb4591a4e2ca2f1dfcf97d1dd7f68779ad7"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.026163 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" event={"ID":"fb1376bd-dda3-4f8a-93df-1c699366f12e","Type":"ContainerStarted","Data":"b55083fda110fd2a6f2b4727124992f70cbaaa64cff481dfad52ec7ba68c95ec"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.026930 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" event={"ID":"92e4d2b6-046c-42ff-afcc-2dda2abe61cc","Type":"ContainerDied","Data":"a59c6d894b444e69356596b55a8953e4b218df33b219ba1f7fe69d2e1874c280"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.027058 4823 scope.go:117] "RemoveContainer" containerID="c69e30eb1de8ad194b9af2d7b29831da6ac5c222aa3bf25d58243d2015cd14d0" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.027123 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.051988 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2a23-account-create-update-vbrvj" event={"ID":"ad35032d-b4a7-40e6-b249-ed6fda0b6917","Type":"ContainerDied","Data":"a5f00f2d0772f25423f1ff5c448058598935e70ca5fd79990f5f0c9c91e899a7"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.052053 4823 generic.go:334] "Generic (PLEG): container finished" podID="ad35032d-b4a7-40e6-b249-ed6fda0b6917" containerID="a5f00f2d0772f25423f1ff5c448058598935e70ca5fd79990f5f0c9c91e899a7" exitCode=0 Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.052208 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2a23-account-create-update-vbrvj" event={"ID":"ad35032d-b4a7-40e6-b249-ed6fda0b6917","Type":"ContainerStarted","Data":"bc65c5b011b923ffa392b516f4ef847faeca2f6b04d95407cdc5bb87fa4b84ac"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.055331 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d17a-account-create-update-pvshz" event={"ID":"72025c1d-b829-47a6-90c0-9be0c98110cb","Type":"ContainerStarted","Data":"a314f28817268141840b8338f58c5bb08bf60a44fee21e503fa91a2db6abfb6f"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.062472 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vxptb" event={"ID":"d5e444ec-f927-4a23-8437-3e3b06ab3498","Type":"ContainerStarted","Data":"30187260f0d5086dcd5d88ffd23cd70132d2d71e5d1a98e711b487ce9a9f655b"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.062525 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vxptb" event={"ID":"d5e444ec-f927-4a23-8437-3e3b06ab3498","Type":"ContainerStarted","Data":"6e1471bdf2cfa116870ca1d4c667c111097abde5eab8b388103b2fbb41efa81b"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.066201 4823 generic.go:334] "Generic (PLEG): 
container finished" podID="45f9b916-5570-4624-822e-587591152bfe" containerID="ffeb49fadcdb837d3003df585c8066b01b288503d22f905488030007e7abce13" exitCode=0 Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.066389 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kbsvz" event={"ID":"45f9b916-5570-4624-822e-587591152bfe","Type":"ContainerDied","Data":"ffeb49fadcdb837d3003df585c8066b01b288503d22f905488030007e7abce13"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.066438 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kbsvz" event={"ID":"45f9b916-5570-4624-822e-587591152bfe","Type":"ContainerStarted","Data":"7518c036ff7d10961973679cb063759c8a34e976a9efedb941c4bfa2532b3b2e"} Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.066549 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f1353050-fc17-4adc-827e-0eb14c17623d" containerName="cinder-scheduler" containerID="cri-o://882d101df45f194da70c92d46a175496a3dd15b8cd33a4ae44f0054ed2f5b48e" gracePeriod=30 Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.066713 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f1353050-fc17-4adc-827e-0eb14c17623d" containerName="probe" containerID="cri-o://85091161a8f4867eb1b3bfcda5a21c3f4e6ca2039f97ca412b7d74b0c73a12f6" gracePeriod=30 Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.078192 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" podStartSLOduration=3.078168378 podStartE2EDuration="3.078168378s" podCreationTimestamp="2026-01-21 17:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:23.050421494 +0000 UTC m=+1303.976552354" watchObservedRunningTime="2026-01-21 17:38:23.078168378 +0000 UTC m=+1304.004299238" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.119602 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-vxptb" podStartSLOduration=3.1195752900000002 podStartE2EDuration="3.11957529s" podCreationTimestamp="2026-01-21 17:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:23.113658104 +0000 UTC m=+1304.039788964" watchObservedRunningTime="2026-01-21 17:38:23.11957529 +0000 UTC m=+1304.045706150" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.388562 4823 scope.go:117] "RemoveContainer" containerID="f0be92cd34acaeceb2cfb696eff7aac413cb4a067ef7a42bc6b06e3eda6c3c2d" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.453249 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.546755 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-scripts\") pod \"9f3f6704-aa00-4387-9410-564e0cf95d93\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.546884 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-logs\") pod \"9f3f6704-aa00-4387-9410-564e0cf95d93\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.546928 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"9f3f6704-aa00-4387-9410-564e0cf95d93\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.546950 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-config-data\") pod \"9f3f6704-aa00-4387-9410-564e0cf95d93\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.547010 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-public-tls-certs\") pod \"9f3f6704-aa00-4387-9410-564e0cf95d93\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.547076 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-httpd-run\") pod \"9f3f6704-aa00-4387-9410-564e0cf95d93\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.547127 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmcpp\" (UniqueName: \"kubernetes.io/projected/9f3f6704-aa00-4387-9410-564e0cf95d93-kube-api-access-kmcpp\") pod \"9f3f6704-aa00-4387-9410-564e0cf95d93\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.547163 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-combined-ca-bundle\") pod \"9f3f6704-aa00-4387-9410-564e0cf95d93\" (UID: \"9f3f6704-aa00-4387-9410-564e0cf95d93\") " Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.550665 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-logs" (OuterVolumeSpecName: "logs") pod "9f3f6704-aa00-4387-9410-564e0cf95d93" (UID: "9f3f6704-aa00-4387-9410-564e0cf95d93"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.554933 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9f3f6704-aa00-4387-9410-564e0cf95d93" (UID: "9f3f6704-aa00-4387-9410-564e0cf95d93"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.562000 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-scripts" (OuterVolumeSpecName: "scripts") pod "9f3f6704-aa00-4387-9410-564e0cf95d93" (UID: "9f3f6704-aa00-4387-9410-564e0cf95d93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.569411 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "9f3f6704-aa00-4387-9410-564e0cf95d93" (UID: "9f3f6704-aa00-4387-9410-564e0cf95d93"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.577884 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3f6704-aa00-4387-9410-564e0cf95d93-kube-api-access-kmcpp" (OuterVolumeSpecName: "kube-api-access-kmcpp") pod "9f3f6704-aa00-4387-9410-564e0cf95d93" (UID: "9f3f6704-aa00-4387-9410-564e0cf95d93"). InnerVolumeSpecName "kube-api-access-kmcpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.629176 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f3f6704-aa00-4387-9410-564e0cf95d93" (UID: "9f3f6704-aa00-4387-9410-564e0cf95d93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.642516 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-config-data" (OuterVolumeSpecName: "config-data") pod "9f3f6704-aa00-4387-9410-564e0cf95d93" (UID: "9f3f6704-aa00-4387-9410-564e0cf95d93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.649882 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.649930 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmcpp\" (UniqueName: \"kubernetes.io/projected/9f3f6704-aa00-4387-9410-564e0cf95d93-kube-api-access-kmcpp\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.649945 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.649956 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.649967 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3f6704-aa00-4387-9410-564e0cf95d93-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.649995 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.650006 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.697657 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.712066 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9f3f6704-aa00-4387-9410-564e0cf95d93" (UID: "9f3f6704-aa00-4387-9410-564e0cf95d93"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.752556 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:23 crc kubenswrapper[4823]: I0121 17:38:23.752596 4823 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f3f6704-aa00-4387-9410-564e0cf95d93-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.078401 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5e444ec-f927-4a23-8437-3e3b06ab3498" containerID="30187260f0d5086dcd5d88ffd23cd70132d2d71e5d1a98e711b487ce9a9f655b" exitCode=0 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.078468 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vxptb" event={"ID":"d5e444ec-f927-4a23-8437-3e3b06ab3498","Type":"ContainerDied","Data":"30187260f0d5086dcd5d88ffd23cd70132d2d71e5d1a98e711b487ce9a9f655b"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.081485 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.081514 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9f3f6704-aa00-4387-9410-564e0cf95d93","Type":"ContainerDied","Data":"dbecf7833200736ba035da758d1666031a6e8e36ec2f701076e5fd830d698d9a"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.081568 4823 scope.go:117] "RemoveContainer" containerID="60f007dd250764f75914cb2570b765d2f1f81f90b83ba35bdbec8ec14db35769" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.085869 4823 generic.go:334] "Generic (PLEG): container finished" podID="f1353050-fc17-4adc-827e-0eb14c17623d" containerID="85091161a8f4867eb1b3bfcda5a21c3f4e6ca2039f97ca412b7d74b0c73a12f6" exitCode=0 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.085900 4823 generic.go:334] "Generic (PLEG): container finished" podID="f1353050-fc17-4adc-827e-0eb14c17623d" containerID="882d101df45f194da70c92d46a175496a3dd15b8cd33a4ae44f0054ed2f5b48e" exitCode=0 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.085953 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1353050-fc17-4adc-827e-0eb14c17623d","Type":"ContainerDied","Data":"85091161a8f4867eb1b3bfcda5a21c3f4e6ca2039f97ca412b7d74b0c73a12f6"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.085987 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1353050-fc17-4adc-827e-0eb14c17623d","Type":"ContainerDied","Data":"882d101df45f194da70c92d46a175496a3dd15b8cd33a4ae44f0054ed2f5b48e"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.101262 4823 generic.go:334] "Generic (PLEG): container finished" podID="fb1376bd-dda3-4f8a-93df-1c699366f12e" containerID="b2512209aae2be45fddeca499b2a6cb4591a4e2ca2f1dfcf97d1dd7f68779ad7" exitCode=0 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.101366 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" event={"ID":"fb1376bd-dda3-4f8a-93df-1c699366f12e","Type":"ContainerDied","Data":"b2512209aae2be45fddeca499b2a6cb4591a4e2ca2f1dfcf97d1dd7f68779ad7"} Jan 21 17:38:24 crc kubenswrapper[4823]: 
I0121 17:38:24.120715 4823 scope.go:117] "RemoveContainer" containerID="7b680b277188a67080df784b710b84edfecab7659e5fd84706112f335574955f" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.135374 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.145385 4823 generic.go:334] "Generic (PLEG): container finished" podID="72025c1d-b829-47a6-90c0-9be0c98110cb" containerID="3efd16b176b2fddd96dca6cd196eaf1d81ce9a51d5457d2fafc70810f4df78ba" exitCode=0 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.146011 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d17a-account-create-update-pvshz" event={"ID":"72025c1d-b829-47a6-90c0-9be0c98110cb","Type":"ContainerDied","Data":"3efd16b176b2fddd96dca6cd196eaf1d81ce9a51d5457d2fafc70810f4df78ba"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.147527 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.155249 4823 generic.go:334] "Generic (PLEG): container finished" podID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerID="627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337" exitCode=0 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.155291 4823 generic.go:334] "Generic (PLEG): container finished" podID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerID="6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49" exitCode=2 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.155302 4823 generic.go:334] "Generic (PLEG): container finished" podID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerID="94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f" exitCode=0 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.155311 4823 generic.go:334] "Generic (PLEG): container finished" podID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerID="5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa" exitCode=0 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.155582 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.156191 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerDied","Data":"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.156227 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerDied","Data":"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.156245 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerDied","Data":"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.156259 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerDied","Data":"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.156270 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67c2aec1-96fe-498d-9638-7b3fa2347f26","Type":"ContainerDied","Data":"65b24ecf8354486e28d27e1731d64ccc31081f63c7acd6151ff72319b2d62bfb"} Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.162901 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188040 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188057 4823 scope.go:117] "RemoveContainer" containerID="627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.188672 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" containerName="init" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188691 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" containerName="init" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.188710 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="proxy-httpd" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188718 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="proxy-httpd" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.188733 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="ceilometer-notification-agent" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188742 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="ceilometer-notification-agent" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.188760 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerName="glance-httpd" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188766 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerName="glance-httpd" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.188774 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="ceilometer-central-agent" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188780 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="ceilometer-central-agent" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.188802 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="sg-core" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188807 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="sg-core" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.188818 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerName="glance-log" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188823 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerName="glance-log" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.188832 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" containerName="dnsmasq-dns" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.188838 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" containerName="dnsmasq-dns" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.189059 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="proxy-httpd" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.189074 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="sg-core" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.189088 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerName="glance-log" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.189102 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3f6704-aa00-4387-9410-564e0cf95d93" containerName="glance-httpd" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.189113 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="ceilometer-central-agent" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.189125 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" containerName="dnsmasq-dns" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.189137 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" containerName="ceilometer-notification-agent" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.190324 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.200910 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.201131 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.224000 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.233307 4823 scope.go:117] "RemoveContainer" containerID="6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.267076 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr6gj\" (UniqueName: \"kubernetes.io/projected/67c2aec1-96fe-498d-9638-7b3fa2347f26-kube-api-access-cr6gj\") pod \"67c2aec1-96fe-498d-9638-7b3fa2347f26\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.267137 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-scripts\") pod \"67c2aec1-96fe-498d-9638-7b3fa2347f26\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.267313 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-run-httpd\") pod \"67c2aec1-96fe-498d-9638-7b3fa2347f26\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.267454 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-sg-core-conf-yaml\") pod \"67c2aec1-96fe-498d-9638-7b3fa2347f26\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.267489 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-log-httpd\") pod \"67c2aec1-96fe-498d-9638-7b3fa2347f26\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.267536 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-config-data\") pod \"67c2aec1-96fe-498d-9638-7b3fa2347f26\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.267573 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-combined-ca-bundle\") pod \"67c2aec1-96fe-498d-9638-7b3fa2347f26\" (UID: \"67c2aec1-96fe-498d-9638-7b3fa2347f26\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.267984 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phjx\" (UniqueName: \"kubernetes.io/projected/c47b64d8-740b-4759-98aa-9d52e87030ae-kube-api-access-9phjx\") pod \"glance-default-external-api-0\" 
(UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.268042 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.268259 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.268313 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c47b64d8-740b-4759-98aa-9d52e87030ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.268412 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.268427 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.268552 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c47b64d8-740b-4759-98aa-9d52e87030ae-logs\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.268723 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.273457 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "67c2aec1-96fe-498d-9638-7b3fa2347f26" (UID: "67c2aec1-96fe-498d-9638-7b3fa2347f26"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.273662 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "67c2aec1-96fe-498d-9638-7b3fa2347f26" (UID: "67c2aec1-96fe-498d-9638-7b3fa2347f26"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.279458 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-scripts" (OuterVolumeSpecName: "scripts") pod "67c2aec1-96fe-498d-9638-7b3fa2347f26" (UID: "67c2aec1-96fe-498d-9638-7b3fa2347f26"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.286153 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c2aec1-96fe-498d-9638-7b3fa2347f26-kube-api-access-cr6gj" (OuterVolumeSpecName: "kube-api-access-cr6gj") pod "67c2aec1-96fe-498d-9638-7b3fa2347f26" (UID: "67c2aec1-96fe-498d-9638-7b3fa2347f26"). InnerVolumeSpecName "kube-api-access-cr6gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.327398 4823 scope.go:117] "RemoveContainer" containerID="94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.370876 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371204 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371250 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c47b64d8-740b-4759-98aa-9d52e87030ae-logs\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371329 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371363 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9phjx\" (UniqueName: \"kubernetes.io/projected/c47b64d8-740b-4759-98aa-9d52e87030ae-kube-api-access-9phjx\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371388 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371663 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371705 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c47b64d8-740b-4759-98aa-9d52e87030ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371834 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr6gj\" (UniqueName: \"kubernetes.io/projected/67c2aec1-96fe-498d-9638-7b3fa2347f26-kube-api-access-cr6gj\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371872 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371887 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.371899 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67c2aec1-96fe-498d-9638-7b3fa2347f26-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.375382 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.375393 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c47b64d8-740b-4759-98aa-9d52e87030ae-logs\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.377702 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c47b64d8-740b-4759-98aa-9d52e87030ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.378168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"67c2aec1-96fe-498d-9638-7b3fa2347f26" (UID: "67c2aec1-96fe-498d-9638-7b3fa2347f26"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.378575 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.381494 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.383687 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.390903 4823 scope.go:117] "RemoveContainer" containerID="5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.397474 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phjx\" (UniqueName: \"kubernetes.io/projected/c47b64d8-740b-4759-98aa-9d52e87030ae-kube-api-access-9phjx\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.398169 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47b64d8-740b-4759-98aa-9d52e87030ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.414105 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c47b64d8-740b-4759-98aa-9d52e87030ae\") " pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.447473 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.457379 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-config-data" (OuterVolumeSpecName: "config-data") pod "67c2aec1-96fe-498d-9638-7b3fa2347f26" (UID: "67c2aec1-96fe-498d-9638-7b3fa2347f26"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.459369 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67c2aec1-96fe-498d-9638-7b3fa2347f26" (UID: "67c2aec1-96fe-498d-9638-7b3fa2347f26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.462635 4823 scope.go:117] "RemoveContainer" containerID="627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.468989 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337\": container with ID starting with 627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337 not found: ID does not exist" containerID="627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.469036 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337"} err="failed to get container status \"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337\": rpc error: code = NotFound desc = could not find container \"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337\": container with ID starting with 627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337 not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.469061 4823 scope.go:117] "RemoveContainer" containerID="6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.469565 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49\": container with ID starting with 6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49 not found: ID does not exist" containerID="6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.469608 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49"} err="failed to get container status \"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49\": rpc error: code = NotFound desc = could not find container \"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49\": container with ID starting with 6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49 not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.469623 4823 scope.go:117] "RemoveContainer" containerID="94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.470150 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f\": container with ID starting with 94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f not found: ID does not exist" 
containerID="94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.470191 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f"} err="failed to get container status \"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f\": rpc error: code = NotFound desc = could not find container \"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f\": container with ID starting with 94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.470206 4823 scope.go:117] "RemoveContainer" containerID="5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa" Jan 21 17:38:24 crc kubenswrapper[4823]: E0121 17:38:24.470441 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa\": container with ID starting with 5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa not found: ID does not exist" containerID="5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.470456 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa"} err="failed to get container status \"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa\": rpc error: code = NotFound desc = could not find container \"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa\": container with ID starting with 5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.470468 4823 scope.go:117] "RemoveContainer" containerID="627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.471145 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337"} err="failed to get container status \"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337\": rpc error: code = NotFound desc = could not find container \"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337\": container with ID starting with 627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337 not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.471161 4823 scope.go:117] "RemoveContainer" containerID="6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.471397 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49"} err="failed to get container status \"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49\": rpc error: code = NotFound desc = could not find container \"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49\": container with ID starting with 6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49 not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.471415 4823 scope.go:117] "RemoveContainer" 
containerID="94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.479117 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.484406 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.484425 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2aec1-96fe-498d-9638-7b3fa2347f26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485005 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f"} err="failed to get container status \"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f\": rpc error: code = NotFound desc = could not find container \"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f\": container with ID starting with 94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485057 4823 scope.go:117] "RemoveContainer" containerID="5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485334 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa"} err="failed to get container status \"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa\": rpc error: code = NotFound desc = could not find container \"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa\": container with ID starting with 5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485348 4823 scope.go:117] "RemoveContainer" containerID="627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485526 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337"} err="failed to get container status \"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337\": rpc error: code = NotFound desc = could not find container \"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337\": container with ID starting with 627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337 not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485541 4823 scope.go:117] "RemoveContainer" containerID="6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485695 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49"} err="failed to get container status \"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49\": rpc error: code = 
NotFound desc = could not find container \"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49\": container with ID starting with 6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49 not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485709 4823 scope.go:117] "RemoveContainer" containerID="94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485921 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f"} err="failed to get container status \"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f\": rpc error: code = NotFound desc = could not find container \"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f\": container with ID starting with 94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.485940 4823 scope.go:117] "RemoveContainer" containerID="5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.486092 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa"} err="failed to get container status \"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa\": rpc error: code = NotFound desc = could not find container \"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa\": container with ID starting with 5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.486104 4823 scope.go:117] "RemoveContainer" containerID="627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.486250 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337"} err="failed to get container status \"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337\": rpc error: code = NotFound desc = could not find container \"627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337\": container with ID starting with 627e07e59cdb38f37ff316c646afe0cb058df6580323bd1753410b40b5644337 not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.486267 4823 scope.go:117] "RemoveContainer" containerID="6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.486429 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49"} err="failed to get container status \"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49\": rpc error: code = NotFound desc = could not find container \"6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49\": container with ID starting with 6af593c74a03ed9c0e2571b0c4f3ba3e534cb9598468ef23a4f9172843d37d49 not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.486445 4823 scope.go:117] "RemoveContainer" containerID="94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 
17:38:24.486609 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f"} err="failed to get container status \"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f\": rpc error: code = NotFound desc = could not find container \"94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f\": container with ID starting with 94b0427d14704bbb4dbd3364479b69e817f04de8481031688fc6ec2f8677951f not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.486625 4823 scope.go:117] "RemoveContainer" containerID="5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.486792 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa"} err="failed to get container status \"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa\": rpc error: code = NotFound desc = could not find container \"5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa\": container with ID starting with 5382067d39cb5a5a8f3e0f8fc2f0d1d88cd2a082c6386c9e2964a694744494aa not found: ID does not exist" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.526516 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.585669 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data\") pod \"f1353050-fc17-4adc-827e-0eb14c17623d\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.585767 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-combined-ca-bundle\") pod \"f1353050-fc17-4adc-827e-0eb14c17623d\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.585908 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1353050-fc17-4adc-827e-0eb14c17623d-etc-machine-id\") pod \"f1353050-fc17-4adc-827e-0eb14c17623d\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.585969 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brknf\" (UniqueName: \"kubernetes.io/projected/f1353050-fc17-4adc-827e-0eb14c17623d-kube-api-access-brknf\") pod \"f1353050-fc17-4adc-827e-0eb14c17623d\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.586087 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-scripts\") pod \"f1353050-fc17-4adc-827e-0eb14c17623d\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.586128 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data-custom\") pod 
\"f1353050-fc17-4adc-827e-0eb14c17623d\" (UID: \"f1353050-fc17-4adc-827e-0eb14c17623d\") " Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.587696 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1353050-fc17-4adc-827e-0eb14c17623d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f1353050-fc17-4adc-827e-0eb14c17623d" (UID: "f1353050-fc17-4adc-827e-0eb14c17623d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.595369 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1353050-fc17-4adc-827e-0eb14c17623d-kube-api-access-brknf" (OuterVolumeSpecName: "kube-api-access-brknf") pod "f1353050-fc17-4adc-827e-0eb14c17623d" (UID: "f1353050-fc17-4adc-827e-0eb14c17623d"). InnerVolumeSpecName "kube-api-access-brknf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.595498 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f1353050-fc17-4adc-827e-0eb14c17623d" (UID: "f1353050-fc17-4adc-827e-0eb14c17623d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.608542 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-scripts" (OuterVolumeSpecName: "scripts") pod "f1353050-fc17-4adc-827e-0eb14c17623d" (UID: "f1353050-fc17-4adc-827e-0eb14c17623d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.665000 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1353050-fc17-4adc-827e-0eb14c17623d" (UID: "f1353050-fc17-4adc-827e-0eb14c17623d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.682089 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.682354 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerName="glance-log" containerID="cri-o://a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf" gracePeriod=30 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.682895 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerName="glance-httpd" containerID="cri-o://1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4" gracePeriod=30 Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.690539 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1353050-fc17-4adc-827e-0eb14c17623d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.690568 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brknf\" (UniqueName: \"kubernetes.io/projected/f1353050-fc17-4adc-827e-0eb14c17623d-kube-api-access-brknf\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.690578 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.690587 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.690595 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.820218 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data" (OuterVolumeSpecName: "config-data") pod "f1353050-fc17-4adc-827e-0eb14c17623d" (UID: "f1353050-fc17-4adc-827e-0eb14c17623d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:24 crc kubenswrapper[4823]: I0121 17:38:24.896149 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1353050-fc17-4adc-827e-0eb14c17623d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.020768 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.036816 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.045485 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.047897 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.102493 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqmxn\" (UniqueName: \"kubernetes.io/projected/45f9b916-5570-4624-822e-587591152bfe-kube-api-access-hqmxn\") pod \"45f9b916-5570-4624-822e-587591152bfe\" (UID: \"45f9b916-5570-4624-822e-587591152bfe\") " Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.102863 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373810fa-4b13-4036-99a4-3b1f4d02c0cf-operator-scripts\") pod \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\" (UID: \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\") " Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.103010 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35032d-b4a7-40e6-b249-ed6fda0b6917-operator-scripts\") pod \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\" (UID: \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\") " Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.103036 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cbj2\" (UniqueName: \"kubernetes.io/projected/ad35032d-b4a7-40e6-b249-ed6fda0b6917-kube-api-access-8cbj2\") pod \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\" (UID: \"ad35032d-b4a7-40e6-b249-ed6fda0b6917\") " Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.103098 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f9b916-5570-4624-822e-587591152bfe-operator-scripts\") pod \"45f9b916-5570-4624-822e-587591152bfe\" (UID: \"45f9b916-5570-4624-822e-587591152bfe\") " Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.103127 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdcmh\" (UniqueName: \"kubernetes.io/projected/373810fa-4b13-4036-99a4-3b1f4d02c0cf-kube-api-access-gdcmh\") pod \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\" (UID: \"373810fa-4b13-4036-99a4-3b1f4d02c0cf\") " Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.105602 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad35032d-b4a7-40e6-b249-ed6fda0b6917-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad35032d-b4a7-40e6-b249-ed6fda0b6917" (UID: "ad35032d-b4a7-40e6-b249-ed6fda0b6917"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.106154 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/373810fa-4b13-4036-99a4-3b1f4d02c0cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "373810fa-4b13-4036-99a4-3b1f4d02c0cf" (UID: "373810fa-4b13-4036-99a4-3b1f4d02c0cf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.106604 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45f9b916-5570-4624-822e-587591152bfe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45f9b916-5570-4624-822e-587591152bfe" (UID: "45f9b916-5570-4624-822e-587591152bfe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.113100 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f9b916-5570-4624-822e-587591152bfe-kube-api-access-hqmxn" (OuterVolumeSpecName: "kube-api-access-hqmxn") pod "45f9b916-5570-4624-822e-587591152bfe" (UID: "45f9b916-5570-4624-822e-587591152bfe"). InnerVolumeSpecName "kube-api-access-hqmxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.117622 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373810fa-4b13-4036-99a4-3b1f4d02c0cf-kube-api-access-gdcmh" (OuterVolumeSpecName: "kube-api-access-gdcmh") pod "373810fa-4b13-4036-99a4-3b1f4d02c0cf" (UID: "373810fa-4b13-4036-99a4-3b1f4d02c0cf"). InnerVolumeSpecName "kube-api-access-gdcmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.122399 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.129133 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad35032d-b4a7-40e6-b249-ed6fda0b6917-kube-api-access-8cbj2" (OuterVolumeSpecName: "kube-api-access-8cbj2") pod "ad35032d-b4a7-40e6-b249-ed6fda0b6917" (UID: "ad35032d-b4a7-40e6-b249-ed6fda0b6917"). InnerVolumeSpecName "kube-api-access-8cbj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159056 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:25 crc kubenswrapper[4823]: E0121 17:38:25.159561 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373810fa-4b13-4036-99a4-3b1f4d02c0cf" containerName="mariadb-database-create" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159584 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="373810fa-4b13-4036-99a4-3b1f4d02c0cf" containerName="mariadb-database-create" Jan 21 17:38:25 crc kubenswrapper[4823]: E0121 17:38:25.159600 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad35032d-b4a7-40e6-b249-ed6fda0b6917" containerName="mariadb-account-create-update" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159610 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad35032d-b4a7-40e6-b249-ed6fda0b6917" containerName="mariadb-account-create-update" Jan 21 17:38:25 crc kubenswrapper[4823]: E0121 17:38:25.159620 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1353050-fc17-4adc-827e-0eb14c17623d" containerName="probe" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159626 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1353050-fc17-4adc-827e-0eb14c17623d" containerName="probe" Jan 21 17:38:25 crc kubenswrapper[4823]: E0121 17:38:25.159635 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f9b916-5570-4624-822e-587591152bfe" containerName="mariadb-database-create" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159641 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f9b916-5570-4624-822e-587591152bfe" containerName="mariadb-database-create" Jan 21 17:38:25 crc kubenswrapper[4823]: E0121 17:38:25.159654 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1353050-fc17-4adc-827e-0eb14c17623d" containerName="cinder-scheduler" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159661 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1353050-fc17-4adc-827e-0eb14c17623d" containerName="cinder-scheduler" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159889 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1353050-fc17-4adc-827e-0eb14c17623d" containerName="cinder-scheduler" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159903 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f9b916-5570-4624-822e-587591152bfe" containerName="mariadb-database-create" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159915 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="373810fa-4b13-4036-99a4-3b1f4d02c0cf" containerName="mariadb-database-create" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159933 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad35032d-b4a7-40e6-b249-ed6fda0b6917" containerName="mariadb-account-create-update" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.159945 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1353050-fc17-4adc-827e-0eb14c17623d" containerName="probe" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.171361 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.175589 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.175878 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.183381 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1353050-fc17-4adc-827e-0eb14c17623d","Type":"ContainerDied","Data":"9bac02db1b02ef95ffe85f72952528681757f459540c6cee71760231b20f36f7"} Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.183433 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.183462 4823 scope.go:117] "RemoveContainer" containerID="85091161a8f4867eb1b3bfcda5a21c3f4e6ca2039f97ca412b7d74b0c73a12f6" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.203695 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2a23-account-create-update-vbrvj" event={"ID":"ad35032d-b4a7-40e6-b249-ed6fda0b6917","Type":"ContainerDied","Data":"bc65c5b011b923ffa392b516f4ef847faeca2f6b04d95407cdc5bb87fa4b84ac"} Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.203730 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2a23-account-create-update-vbrvj" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.203739 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc65c5b011b923ffa392b516f4ef847faeca2f6b04d95407cdc5bb87fa4b84ac" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205394 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-config-data\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205422 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205468 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-run-httpd\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205486 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-scripts\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-log-httpd\") pod \"ceilometer-0\" 
(UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205569 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205631 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbfqr\" (UniqueName: \"kubernetes.io/projected/99503901-5bf8-428b-b835-76ef4a876036-kube-api-access-tbfqr\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205687 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35032d-b4a7-40e6-b249-ed6fda0b6917-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205702 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cbj2\" (UniqueName: \"kubernetes.io/projected/ad35032d-b4a7-40e6-b249-ed6fda0b6917-kube-api-access-8cbj2\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205713 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f9b916-5570-4624-822e-587591152bfe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205722 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdcmh\" (UniqueName: \"kubernetes.io/projected/373810fa-4b13-4036-99a4-3b1f4d02c0cf-kube-api-access-gdcmh\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205730 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqmxn\" (UniqueName: \"kubernetes.io/projected/45f9b916-5570-4624-822e-587591152bfe-kube-api-access-hqmxn\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.205740 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373810fa-4b13-4036-99a4-3b1f4d02c0cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.226285 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.252553 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kbsvz" event={"ID":"45f9b916-5570-4624-822e-587591152bfe","Type":"ContainerDied","Data":"7518c036ff7d10961973679cb063759c8a34e976a9efedb941c4bfa2532b3b2e"} Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.252600 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7518c036ff7d10961973679cb063759c8a34e976a9efedb941c4bfa2532b3b2e" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.252680 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kbsvz" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.266329 4823 scope.go:117] "RemoveContainer" containerID="882d101df45f194da70c92d46a175496a3dd15b8cd33a4ae44f0054ed2f5b48e" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.267575 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.282658 4823 generic.go:334] "Generic (PLEG): container finished" podID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerID="a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf" exitCode=143 Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.282722 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21eb6cab-7de3-4826-9d12-33a1b7e13a13","Type":"ContainerDied","Data":"a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf"} Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.306988 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.307089 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbfqr\" (UniqueName: \"kubernetes.io/projected/99503901-5bf8-428b-b835-76ef4a876036-kube-api-access-tbfqr\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.307165 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-config-data\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.307181 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.307227 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-run-httpd\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.307244 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-scripts\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.307260 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-log-httpd\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.308107 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-log-httpd\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.308479 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-run-httpd\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.310328 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vsz94" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.311032 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vsz94" event={"ID":"373810fa-4b13-4036-99a4-3b1f4d02c0cf","Type":"ContainerDied","Data":"556959d3cd004ca6a137ea0977f1ae19772fa46df17e70bd1b36846bb481c369"} Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.311062 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="556959d3cd004ca6a137ea0977f1ae19772fa46df17e70bd1b36846bb481c369" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.326741 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.331316 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-config-data\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.339139 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-scripts\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.349343 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbfqr\" (UniqueName: \"kubernetes.io/projected/99503901-5bf8-428b-b835-76ef4a876036-kube-api-access-tbfqr\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.365798 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") " pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.521815 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c2aec1-96fe-498d-9638-7b3fa2347f26" path="/var/lib/kubelet/pods/67c2aec1-96fe-498d-9638-7b3fa2347f26/volumes" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.523485 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3f6704-aa00-4387-9410-564e0cf95d93" path="/var/lib/kubelet/pods/9f3f6704-aa00-4387-9410-564e0cf95d93/volumes" Jan 21 17:38:25 crc 
kubenswrapper[4823]: I0121 17:38:25.524276 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.524300 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.524315 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.525817 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.525918 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.529703 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.531924 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.618038 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.618105 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.618158 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jtc7\" (UniqueName: \"kubernetes.io/projected/a1799c60-9bb6-473f-a01f-490dfb36b396-kube-api-access-5jtc7\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.618462 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.618642 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.618819 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1799c60-9bb6-473f-a01f-490dfb36b396-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.721738 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.722137 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.722287 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1799c60-9bb6-473f-a01f-490dfb36b396-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.722454 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.722562 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.722700 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtc7\" (UniqueName: \"kubernetes.io/projected/a1799c60-9bb6-473f-a01f-490dfb36b396-kube-api-access-5jtc7\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.724158 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1799c60-9bb6-473f-a01f-490dfb36b396-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.729350 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.733118 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.734303 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 
17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.736466 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1799c60-9bb6-473f-a01f-490dfb36b396-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.751777 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtc7\" (UniqueName: \"kubernetes.io/projected/a1799c60-9bb6-473f-a01f-490dfb36b396-kube-api-access-5jtc7\") pod \"cinder-scheduler-0\" (UID: \"a1799c60-9bb6-473f-a01f-490dfb36b396\") " pod="openstack/cinder-scheduler-0" Jan 21 17:38:25 crc kubenswrapper[4823]: I0121 17:38:25.867339 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.123300 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.150942 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.158875 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.240104 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb1376bd-dda3-4f8a-93df-1c699366f12e-operator-scripts\") pod \"fb1376bd-dda3-4f8a-93df-1c699366f12e\" (UID: \"fb1376bd-dda3-4f8a-93df-1c699366f12e\") " Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.240298 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz57h\" (UniqueName: \"kubernetes.io/projected/72025c1d-b829-47a6-90c0-9be0c98110cb-kube-api-access-zz57h\") pod \"72025c1d-b829-47a6-90c0-9be0c98110cb\" (UID: \"72025c1d-b829-47a6-90c0-9be0c98110cb\") " Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.240391 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e444ec-f927-4a23-8437-3e3b06ab3498-operator-scripts\") pod \"d5e444ec-f927-4a23-8437-3e3b06ab3498\" (UID: \"d5e444ec-f927-4a23-8437-3e3b06ab3498\") " Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.240448 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72025c1d-b829-47a6-90c0-9be0c98110cb-operator-scripts\") pod \"72025c1d-b829-47a6-90c0-9be0c98110cb\" (UID: \"72025c1d-b829-47a6-90c0-9be0c98110cb\") " Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.240547 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8bzr\" (UniqueName: \"kubernetes.io/projected/d5e444ec-f927-4a23-8437-3e3b06ab3498-kube-api-access-z8bzr\") pod \"d5e444ec-f927-4a23-8437-3e3b06ab3498\" (UID: \"d5e444ec-f927-4a23-8437-3e3b06ab3498\") " Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.241804 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k42fn\" (UniqueName: 
\"kubernetes.io/projected/fb1376bd-dda3-4f8a-93df-1c699366f12e-kube-api-access-k42fn\") pod \"fb1376bd-dda3-4f8a-93df-1c699366f12e\" (UID: \"fb1376bd-dda3-4f8a-93df-1c699366f12e\") " Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.245818 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb1376bd-dda3-4f8a-93df-1c699366f12e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb1376bd-dda3-4f8a-93df-1c699366f12e" (UID: "fb1376bd-dda3-4f8a-93df-1c699366f12e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.246234 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72025c1d-b829-47a6-90c0-9be0c98110cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72025c1d-b829-47a6-90c0-9be0c98110cb" (UID: "72025c1d-b829-47a6-90c0-9be0c98110cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.246362 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e444ec-f927-4a23-8437-3e3b06ab3498-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5e444ec-f927-4a23-8437-3e3b06ab3498" (UID: "d5e444ec-f927-4a23-8437-3e3b06ab3498"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.251764 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e444ec-f927-4a23-8437-3e3b06ab3498-kube-api-access-z8bzr" (OuterVolumeSpecName: "kube-api-access-z8bzr") pod "d5e444ec-f927-4a23-8437-3e3b06ab3498" (UID: "d5e444ec-f927-4a23-8437-3e3b06ab3498"). InnerVolumeSpecName "kube-api-access-z8bzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.253086 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1376bd-dda3-4f8a-93df-1c699366f12e-kube-api-access-k42fn" (OuterVolumeSpecName: "kube-api-access-k42fn") pod "fb1376bd-dda3-4f8a-93df-1c699366f12e" (UID: "fb1376bd-dda3-4f8a-93df-1c699366f12e"). InnerVolumeSpecName "kube-api-access-k42fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.254361 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72025c1d-b829-47a6-90c0-9be0c98110cb-kube-api-access-zz57h" (OuterVolumeSpecName: "kube-api-access-zz57h") pod "72025c1d-b829-47a6-90c0-9be0c98110cb" (UID: "72025c1d-b829-47a6-90c0-9be0c98110cb"). InnerVolumeSpecName "kube-api-access-zz57h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.339114 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c47b64d8-740b-4759-98aa-9d52e87030ae","Type":"ContainerStarted","Data":"17f4f6ba6699de4473e9152b8883eecda8cab49a5f72747019f90e44401d63b3"} Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.341530 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.345102 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k42fn\" (UniqueName: \"kubernetes.io/projected/fb1376bd-dda3-4f8a-93df-1c699366f12e-kube-api-access-k42fn\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.345125 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb1376bd-dda3-4f8a-93df-1c699366f12e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.345134 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz57h\" (UniqueName: \"kubernetes.io/projected/72025c1d-b829-47a6-90c0-9be0c98110cb-kube-api-access-zz57h\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.345143 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e444ec-f927-4a23-8437-3e3b06ab3498-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.345150 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72025c1d-b829-47a6-90c0-9be0c98110cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.345158 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8bzr\" (UniqueName: \"kubernetes.io/projected/d5e444ec-f927-4a23-8437-3e3b06ab3498-kube-api-access-z8bzr\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.346720 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vxptb" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.346749 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vxptb" event={"ID":"d5e444ec-f927-4a23-8437-3e3b06ab3498","Type":"ContainerDied","Data":"6e1471bdf2cfa116870ca1d4c667c111097abde5eab8b388103b2fbb41efa81b"} Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.346790 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1471bdf2cfa116870ca1d4c667c111097abde5eab8b388103b2fbb41efa81b" Jan 21 17:38:26 crc kubenswrapper[4823]: W0121 17:38:26.346813 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99503901_5bf8_428b_b835_76ef4a876036.slice/crio-fdb0bac4269c3b7e864efbabd83e6a54c27d56e82828fa298d48be5120466e94 WatchSource:0}: Error finding container fdb0bac4269c3b7e864efbabd83e6a54c27d56e82828fa298d48be5120466e94: Status 404 returned error can't find the container with id fdb0bac4269c3b7e864efbabd83e6a54c27d56e82828fa298d48be5120466e94 Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.348318 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.348407 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c8d-account-create-update-2cn9n" event={"ID":"fb1376bd-dda3-4f8a-93df-1c699366f12e","Type":"ContainerDied","Data":"b55083fda110fd2a6f2b4727124992f70cbaaa64cff481dfad52ec7ba68c95ec"} Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.348438 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b55083fda110fd2a6f2b4727124992f70cbaaa64cff481dfad52ec7ba68c95ec" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.371326 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d17a-account-create-update-pvshz" event={"ID":"72025c1d-b829-47a6-90c0-9be0c98110cb","Type":"ContainerDied","Data":"a314f28817268141840b8338f58c5bb08bf60a44fee21e503fa91a2db6abfb6f"} Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.371368 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a314f28817268141840b8338f58c5bb08bf60a44fee21e503fa91a2db6abfb6f" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.371494 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d17a-account-create-update-pvshz" Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.619593 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 17:38:26 crc kubenswrapper[4823]: I0121 17:38:26.918776 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:27 crc kubenswrapper[4823]: I0121 17:38:27.363143 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1353050-fc17-4adc-827e-0eb14c17623d" path="/var/lib/kubelet/pods/f1353050-fc17-4adc-827e-0eb14c17623d/volumes" Jan 21 17:38:27 crc kubenswrapper[4823]: I0121 17:38:27.387233 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1799c60-9bb6-473f-a01f-490dfb36b396","Type":"ContainerStarted","Data":"6aa526f5ace5ef2b5ca7c4e1ec5090c837d0831062ac70ac0827f7dcb4078811"} Jan 21 17:38:27 crc kubenswrapper[4823]: I0121 17:38:27.387290 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1799c60-9bb6-473f-a01f-490dfb36b396","Type":"ContainerStarted","Data":"117a193fd282be4c34fae0ac7c9ab3c5c76e77cfeaef31131252ea891b5860c4"} Jan 21 17:38:27 crc kubenswrapper[4823]: I0121 17:38:27.390019 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c47b64d8-740b-4759-98aa-9d52e87030ae","Type":"ContainerStarted","Data":"2d401b10a300ba8af0e2c24f602f7b5e3c71430725b7f05c95a0573c418d1d30"} Jan 21 17:38:27 crc kubenswrapper[4823]: I0121 17:38:27.390068 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c47b64d8-740b-4759-98aa-9d52e87030ae","Type":"ContainerStarted","Data":"23a1148c892e95cbf2a90a0283a2d299f548bcda3112f1a2173b64ae1ab56de1"} Jan 21 17:38:27 crc kubenswrapper[4823]: I0121 17:38:27.394460 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerStarted","Data":"c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451"} Jan 21 17:38:27 crc kubenswrapper[4823]: I0121 17:38:27.394508 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerStarted","Data":"fdb0bac4269c3b7e864efbabd83e6a54c27d56e82828fa298d48be5120466e94"} Jan 21 17:38:27 crc kubenswrapper[4823]: I0121 17:38:27.420399 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.420375954 podStartE2EDuration="3.420375954s" podCreationTimestamp="2026-01-21 17:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:27.410294146 +0000 UTC m=+1308.336425006" watchObservedRunningTime="2026-01-21 17:38:27.420375954 +0000 UTC m=+1308.346506814" Jan 21 17:38:28 crc kubenswrapper[4823]: I0121 17:38:28.417206 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerStarted","Data":"dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4"} Jan 21 17:38:28 crc kubenswrapper[4823]: I0121 17:38:28.432077 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"a1799c60-9bb6-473f-a01f-490dfb36b396","Type":"ContainerStarted","Data":"16d622582fee03d182186455716921a3f44e31cf369f4041b4b90e44efd2b30e"} Jan 21 17:38:28 crc kubenswrapper[4823]: I0121 17:38:28.465928 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.465903822 podStartE2EDuration="3.465903822s" podCreationTimestamp="2026-01-21 17:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:28.462639792 +0000 UTC m=+1309.388770672" watchObservedRunningTime="2026-01-21 17:38:28.465903822 +0000 UTC m=+1309.392034702" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.053093 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.216494 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-logs\") pod \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.216772 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.216829 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-httpd-run\") pod \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.216873 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-scripts\") pod \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.216941 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-config-data\") pod \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.217010 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6fc7\" (UniqueName: \"kubernetes.io/projected/21eb6cab-7de3-4826-9d12-33a1b7e13a13-kube-api-access-b6fc7\") pod \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.217139 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-internal-tls-certs\") pod \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.217159 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-combined-ca-bundle\") pod \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\" (UID: \"21eb6cab-7de3-4826-9d12-33a1b7e13a13\") " Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.228385 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-scripts" (OuterVolumeSpecName: "scripts") pod "21eb6cab-7de3-4826-9d12-33a1b7e13a13" (UID: "21eb6cab-7de3-4826-9d12-33a1b7e13a13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.228817 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-logs" (OuterVolumeSpecName: "logs") pod "21eb6cab-7de3-4826-9d12-33a1b7e13a13" (UID: "21eb6cab-7de3-4826-9d12-33a1b7e13a13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.230199 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "21eb6cab-7de3-4826-9d12-33a1b7e13a13" (UID: "21eb6cab-7de3-4826-9d12-33a1b7e13a13"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.241163 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "21eb6cab-7de3-4826-9d12-33a1b7e13a13" (UID: "21eb6cab-7de3-4826-9d12-33a1b7e13a13"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.268137 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21eb6cab-7de3-4826-9d12-33a1b7e13a13-kube-api-access-b6fc7" (OuterVolumeSpecName: "kube-api-access-b6fc7") pod "21eb6cab-7de3-4826-9d12-33a1b7e13a13" (UID: "21eb6cab-7de3-4826-9d12-33a1b7e13a13"). InnerVolumeSpecName "kube-api-access-b6fc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.313312 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21eb6cab-7de3-4826-9d12-33a1b7e13a13" (UID: "21eb6cab-7de3-4826-9d12-33a1b7e13a13"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.321677 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.321711 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.321735 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.321744 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21eb6cab-7de3-4826-9d12-33a1b7e13a13-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.321753 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.321761 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6fc7\" (UniqueName: \"kubernetes.io/projected/21eb6cab-7de3-4826-9d12-33a1b7e13a13-kube-api-access-b6fc7\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.422246 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "21eb6cab-7de3-4826-9d12-33a1b7e13a13" (UID: "21eb6cab-7de3-4826-9d12-33a1b7e13a13"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.428074 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.433774 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.433825 4823 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.437077 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-config-data" (OuterVolumeSpecName: "config-data") pod "21eb6cab-7de3-4826-9d12-33a1b7e13a13" (UID: "21eb6cab-7de3-4826-9d12-33a1b7e13a13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.484828 4823 generic.go:334] "Generic (PLEG): container finished" podID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerID="1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4" exitCode=0 Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.484940 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21eb6cab-7de3-4826-9d12-33a1b7e13a13","Type":"ContainerDied","Data":"1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4"} Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.484985 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21eb6cab-7de3-4826-9d12-33a1b7e13a13","Type":"ContainerDied","Data":"8668939d7b2fba38c241250746c4460836a5452a094892dd07ca43e3cf4ced36"} Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.485010 4823 scope.go:117] "RemoveContainer" containerID="1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.485130 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.508425 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerStarted","Data":"a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a"} Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.523485 4823 scope.go:117] "RemoveContainer" containerID="a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.536143 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eb6cab-7de3-4826-9d12-33a1b7e13a13-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.545805 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.581920 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.582131 4823 scope.go:117] "RemoveContainer" containerID="1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4" Jan 21 17:38:29 crc kubenswrapper[4823]: E0121 17:38:29.588014 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4\": container with ID starting with 1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4 not found: ID does not exist" containerID="1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.588063 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4"} err="failed to get container status \"1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4\": rpc error: code = NotFound desc = could not find container \"1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4\": container with ID starting with 
1a424ce7b94c3ea317784d08e774beccae8f1ac2e37f1ce77949525add21d6d4 not found: ID does not exist" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.588091 4823 scope.go:117] "RemoveContainer" containerID="a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.591816 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:38:29 crc kubenswrapper[4823]: E0121 17:38:29.592290 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerName="glance-log" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592303 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerName="glance-log" Jan 21 17:38:29 crc kubenswrapper[4823]: E0121 17:38:29.592316 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1376bd-dda3-4f8a-93df-1c699366f12e" containerName="mariadb-account-create-update" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592322 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1376bd-dda3-4f8a-93df-1c699366f12e" containerName="mariadb-account-create-update" Jan 21 17:38:29 crc kubenswrapper[4823]: E0121 17:38:29.592351 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerName="glance-httpd" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592357 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerName="glance-httpd" Jan 21 17:38:29 crc kubenswrapper[4823]: E0121 17:38:29.592369 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e444ec-f927-4a23-8437-3e3b06ab3498" containerName="mariadb-database-create" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592375 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e444ec-f927-4a23-8437-3e3b06ab3498" containerName="mariadb-database-create" Jan 21 17:38:29 crc kubenswrapper[4823]: E0121 17:38:29.592388 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72025c1d-b829-47a6-90c0-9be0c98110cb" containerName="mariadb-account-create-update" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592393 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="72025c1d-b829-47a6-90c0-9be0c98110cb" containerName="mariadb-account-create-update" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592567 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="72025c1d-b829-47a6-90c0-9be0c98110cb" containerName="mariadb-account-create-update" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592581 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e444ec-f927-4a23-8437-3e3b06ab3498" containerName="mariadb-database-create" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592595 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerName="glance-log" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592605 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1376bd-dda3-4f8a-93df-1c699366f12e" containerName="mariadb-account-create-update" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.592619 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" containerName="glance-httpd" Jan 21 17:38:29 crc 
kubenswrapper[4823]: E0121 17:38:29.593009 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf\": container with ID starting with a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf not found: ID does not exist" containerID="a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.593034 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf"} err="failed to get container status \"a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf\": rpc error: code = NotFound desc = could not find container \"a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf\": container with ID starting with a14b6ba5898e4268b525f7d96c6fede8ec942673dbfb814b88149462318218bf not found: ID does not exist" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.593667 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.597741 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.597929 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.629762 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.740587 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb937adb-bff8-4b3c-950f-cc5dddd41b95-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.740685 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.740745 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.740785 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.740867 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.740930 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.741038 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb937adb-bff8-4b3c-950f-cc5dddd41b95-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.741100 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xwx\" (UniqueName: \"kubernetes.io/projected/cb937adb-bff8-4b3c-950f-cc5dddd41b95-kube-api-access-g4xwx\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.843838 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.843930 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.843957 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.843997 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.844038 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.844059 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cb937adb-bff8-4b3c-950f-cc5dddd41b95-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.844077 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4xwx\" (UniqueName: \"kubernetes.io/projected/cb937adb-bff8-4b3c-950f-cc5dddd41b95-kube-api-access-g4xwx\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.844127 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb937adb-bff8-4b3c-950f-cc5dddd41b95-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.844526 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb937adb-bff8-4b3c-950f-cc5dddd41b95-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.844740 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.850658 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb937adb-bff8-4b3c-950f-cc5dddd41b95-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.854882 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.859724 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.859924 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.860764 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb937adb-bff8-4b3c-950f-cc5dddd41b95-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.870247 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4xwx\" (UniqueName: \"kubernetes.io/projected/cb937adb-bff8-4b3c-950f-cc5dddd41b95-kube-api-access-g4xwx\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.878750 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb937adb-bff8-4b3c-950f-cc5dddd41b95\") " pod="openstack/glance-default-internal-api-0" Jan 21 17:38:29 crc kubenswrapper[4823]: I0121 17:38:29.932171 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 17:38:30 crc kubenswrapper[4823]: I0121 17:38:30.426897 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:38:30 crc kubenswrapper[4823]: I0121 17:38:30.533685 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68db9cf4b4-kzfgq" Jan 21 17:38:30 crc kubenswrapper[4823]: I0121 17:38:30.776448 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 17:38:30 crc kubenswrapper[4823]: I0121 17:38:30.869973 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.193787 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qs8v9"] Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.195587 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qs8v9" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.202334 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ftrqd" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.202633 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.202779 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.214906 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qs8v9"] Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.295265 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-config-data\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.295323 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b9bf\" (UniqueName: \"kubernetes.io/projected/223be500-ec7c-4380-9001-8f80d0f799f2-kube-api-access-9b9bf\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.295394 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.295427 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-scripts\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.359626 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21eb6cab-7de3-4826-9d12-33a1b7e13a13" path="/var/lib/kubelet/pods/21eb6cab-7de3-4826-9d12-33a1b7e13a13/volumes" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.397487 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-config-data\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.397555 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b9bf\" (UniqueName: \"kubernetes.io/projected/223be500-ec7c-4380-9001-8f80d0f799f2-kube-api-access-9b9bf\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9" Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 
17:38:31.397644 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9"
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.397688 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-scripts\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9"
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.410588 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-scripts\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9"
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.415047 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9"
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.422820 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-config-data\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9"
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.424292 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b9bf\" (UniqueName: \"kubernetes.io/projected/223be500-ec7c-4380-9001-8f80d0f799f2-kube-api-access-9b9bf\") pod \"nova-cell0-conductor-db-sync-qs8v9\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " pod="openstack/nova-cell0-conductor-db-sync-qs8v9"
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.425379 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.544556 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qs8v9"
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.546128 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb937adb-bff8-4b3c-950f-cc5dddd41b95","Type":"ContainerStarted","Data":"7c6023285c1578a024bb4f511f99e1ddc36ca7c54aa00ac9f487b3d211f44f7a"}
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.554269 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerStarted","Data":"3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a"}
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.554481 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="ceilometer-central-agent" containerID="cri-o://c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451" gracePeriod=30
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.554738 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.555082 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="proxy-httpd" containerID="cri-o://3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a" gracePeriod=30
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.555134 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="sg-core" containerID="cri-o://a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a" gracePeriod=30
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.555165 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="ceilometer-notification-agent" containerID="cri-o://dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4" gracePeriod=30
Jan 21 17:38:31 crc kubenswrapper[4823]: I0121 17:38:31.588403 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.323725098 podStartE2EDuration="6.58837732s" podCreationTimestamp="2026-01-21 17:38:25 +0000 UTC" firstStartedPulling="2026-01-21 17:38:26.359771063 +0000 UTC m=+1307.285901923" lastFinishedPulling="2026-01-21 17:38:30.624423285 +0000 UTC m=+1311.550554145" observedRunningTime="2026-01-21 17:38:31.588206926 +0000 UTC m=+1312.514337806" watchObservedRunningTime="2026-01-21 17:38:31.58837732 +0000 UTC m=+1312.514508180"
Jan 21 17:38:32 crc kubenswrapper[4823]: I0121 17:38:32.050318 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qs8v9"]
Jan 21 17:38:32 crc kubenswrapper[4823]: I0121 17:38:32.572952 4823 generic.go:334] "Generic (PLEG): container finished" podID="99503901-5bf8-428b-b835-76ef4a876036" containerID="3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a" exitCode=0
Jan 21 17:38:32 crc kubenswrapper[4823]: I0121 17:38:32.573277 4823 generic.go:334] "Generic (PLEG): container finished" podID="99503901-5bf8-428b-b835-76ef4a876036" containerID="a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a" exitCode=2
Jan 21 17:38:32 crc kubenswrapper[4823]: I0121 17:38:32.573292 4823 generic.go:334] "Generic (PLEG): container finished" podID="99503901-5bf8-428b-b835-76ef4a876036" containerID="dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4" exitCode=0
Jan 21 17:38:32 crc kubenswrapper[4823]: I0121 17:38:32.573142 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerDied","Data":"3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a"}
Jan 21 17:38:32 crc kubenswrapper[4823]: I0121 17:38:32.573361 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerDied","Data":"a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a"}
Jan 21 17:38:32 crc kubenswrapper[4823]: I0121 17:38:32.573374 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerDied","Data":"dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4"}
Jan 21 17:38:32 crc kubenswrapper[4823]: I0121 17:38:32.578849 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb937adb-bff8-4b3c-950f-cc5dddd41b95","Type":"ContainerStarted","Data":"2c595772bf2903f1469b5345ad7e74b46ab10306a63bd974baa771a79afee7f2"}
Jan 21 17:38:32 crc kubenswrapper[4823]: I0121 17:38:32.580230 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qs8v9" event={"ID":"223be500-ec7c-4380-9001-8f80d0f799f2","Type":"ContainerStarted","Data":"6edb5c32e14273a50ec2b5790c0f6c570d1388f0f2cc2f8be4c9e6d2bf752e0f"}
Jan 21 17:38:33 crc kubenswrapper[4823]: I0121 17:38:33.596232 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb937adb-bff8-4b3c-950f-cc5dddd41b95","Type":"ContainerStarted","Data":"d8a6f4405e47b1a3649692e0158874e0c76b56a031dad2fd87471c0dfdcc6f2f"}
Jan 21 17:38:33 crc kubenswrapper[4823]: I0121 17:38:33.630264 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.630243814 podStartE2EDuration="4.630243814s" podCreationTimestamp="2026-01-21 17:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:38:33.616818663 +0000 UTC m=+1314.542949533" watchObservedRunningTime="2026-01-21 17:38:33.630243814 +0000 UTC m=+1314.556374664"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.526723 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.527125 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.527165 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.578522 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.623725 4823 generic.go:334] "Generic (PLEG): container finished" podID="99503901-5bf8-428b-b835-76ef4a876036" containerID="c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451" exitCode=0
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.624385 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerDied","Data":"c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451"}
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.624434 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99503901-5bf8-428b-b835-76ef4a876036","Type":"ContainerDied","Data":"fdb0bac4269c3b7e864efbabd83e6a54c27d56e82828fa298d48be5120466e94"}
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.624469 4823 scope.go:117] "RemoveContainer" containerID="3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.624675 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.624774 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.624865 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.678846 4823 scope.go:117] "RemoveContainer" containerID="a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.698649 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-combined-ca-bundle\") pod \"99503901-5bf8-428b-b835-76ef4a876036\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") "
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.701859 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-log-httpd\") pod \"99503901-5bf8-428b-b835-76ef4a876036\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") "
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.701966 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-run-httpd\") pod \"99503901-5bf8-428b-b835-76ef4a876036\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") "
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.702018 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-scripts\") pod \"99503901-5bf8-428b-b835-76ef4a876036\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") "
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.702104 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-config-data\") pod \"99503901-5bf8-428b-b835-76ef4a876036\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") "
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.702243 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbfqr\" (UniqueName: \"kubernetes.io/projected/99503901-5bf8-428b-b835-76ef4a876036-kube-api-access-tbfqr\") pod \"99503901-5bf8-428b-b835-76ef4a876036\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") "
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.702267 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-sg-core-conf-yaml\") pod \"99503901-5bf8-428b-b835-76ef4a876036\" (UID: \"99503901-5bf8-428b-b835-76ef4a876036\") "
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.705236 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "99503901-5bf8-428b-b835-76ef4a876036" (UID: "99503901-5bf8-428b-b835-76ef4a876036"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.705558 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "99503901-5bf8-428b-b835-76ef4a876036" (UID: "99503901-5bf8-428b-b835-76ef4a876036"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.709317 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-scripts" (OuterVolumeSpecName: "scripts") pod "99503901-5bf8-428b-b835-76ef4a876036" (UID: "99503901-5bf8-428b-b835-76ef4a876036"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.711701 4823 scope.go:117] "RemoveContainer" containerID="dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.714950 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99503901-5bf8-428b-b835-76ef4a876036-kube-api-access-tbfqr" (OuterVolumeSpecName: "kube-api-access-tbfqr") pod "99503901-5bf8-428b-b835-76ef4a876036" (UID: "99503901-5bf8-428b-b835-76ef4a876036"). InnerVolumeSpecName "kube-api-access-tbfqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.767951 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "99503901-5bf8-428b-b835-76ef4a876036" (UID: "99503901-5bf8-428b-b835-76ef4a876036"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.806755 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.807191 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.807206 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbfqr\" (UniqueName: \"kubernetes.io/projected/99503901-5bf8-428b-b835-76ef4a876036-kube-api-access-tbfqr\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.807219 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.807229 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99503901-5bf8-428b-b835-76ef4a876036-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.837157 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99503901-5bf8-428b-b835-76ef4a876036" (UID: "99503901-5bf8-428b-b835-76ef4a876036"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.852574 4823 scope.go:117] "RemoveContainer" containerID="c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.869041 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-config-data" (OuterVolumeSpecName: "config-data") pod "99503901-5bf8-428b-b835-76ef4a876036" (UID: "99503901-5bf8-428b-b835-76ef4a876036"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.886736 4823 scope.go:117] "RemoveContainer" containerID="3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a"
Jan 21 17:38:34 crc kubenswrapper[4823]: E0121 17:38:34.888046 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a\": container with ID starting with 3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a not found: ID does not exist" containerID="3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.888084 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a"} err="failed to get container status \"3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a\": rpc error: code = NotFound desc = could not find container \"3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a\": container with ID starting with 3d1f4d0b4bdae2970b39d155e3c2458f079afeaf5a3964798acaafd354f1fa4a not found: ID does not exist"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.888108 4823 scope.go:117] "RemoveContainer" containerID="a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a"
Jan 21 17:38:34 crc kubenswrapper[4823]: E0121 17:38:34.889143 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a\": container with ID starting with a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a not found: ID does not exist" containerID="a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.889172 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a"} err="failed to get container status \"a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a\": rpc error: code = NotFound desc = could not find container \"a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a\": container with ID starting with a7eb21bdb0d038cca6ddf15f52ee4fb99999530fe629701a7f56923fa387c39a not found: ID does not exist"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.889189 4823 scope.go:117] "RemoveContainer" containerID="dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4"
Jan 21 17:38:34 crc kubenswrapper[4823]: E0121 17:38:34.889454 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4\": container with ID starting with dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4 not found: ID does not exist" containerID="dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.889478 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4"} err="failed to get container status \"dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4\": rpc error: code = NotFound desc = could not find container \"dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4\": container with ID starting with dd2a4c4247380f16781f78e3a79cfd53a5b2bdcc621f3f47109743351eda58f4 not found: ID does not exist"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.889492 4823 scope.go:117] "RemoveContainer" containerID="c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451"
Jan 21 17:38:34 crc kubenswrapper[4823]: E0121 17:38:34.889702 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451\": container with ID starting with c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451 not found: ID does not exist" containerID="c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.889724 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451"} err="failed to get container status \"c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451\": rpc error: code = NotFound desc = could not find container \"c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451\": container with ID starting with c3afb27cc2482fcf52f18dbda59b3f543bb755c9b8e8050ba95fc0f2df4cd451 not found: ID does not exist"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.909298 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.909332 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99503901-5bf8-428b-b835-76ef4a876036-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.964460 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.976311 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.995447 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:34 crc kubenswrapper[4823]: E0121 17:38:34.996038 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="ceilometer-notification-agent"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.996064 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="ceilometer-notification-agent"
Jan 21 17:38:34 crc kubenswrapper[4823]: E0121 17:38:34.996089 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="ceilometer-central-agent"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.996099 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="ceilometer-central-agent"
Jan 21 17:38:34 crc kubenswrapper[4823]: E0121 17:38:34.996127 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="sg-core"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.996136 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="sg-core"
Jan 21 17:38:34 crc kubenswrapper[4823]: E0121 17:38:34.996162 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="proxy-httpd"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.996169 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="proxy-httpd"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.996427 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="ceilometer-central-agent"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.996443 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="proxy-httpd"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.996464 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="ceilometer-notification-agent"
Jan 21 17:38:34 crc kubenswrapper[4823]: I0121 17:38:34.996478 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="99503901-5bf8-428b-b835-76ef4a876036" containerName="sg-core"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:34.999317 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.002007 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.003649 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.014485 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.113846 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-run-httpd\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.113928 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-scripts\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.113992 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-config-data\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.114020 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.114197 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.114375 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-log-httpd\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.114603 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxg9\" (UniqueName: \"kubernetes.io/projected/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-kube-api-access-ggxg9\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.217163 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-run-httpd\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.217209 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-scripts\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.217277 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-config-data\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.217306 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.217341 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.217384 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-log-httpd\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.217431 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxg9\" (UniqueName: \"kubernetes.io/projected/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-kube-api-access-ggxg9\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.218351 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-run-httpd\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.221316 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-log-httpd\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.225798 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.227179 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.230537 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-scripts\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.251362 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-config-data\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.259725 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxg9\" (UniqueName: \"kubernetes.io/projected/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-kube-api-access-ggxg9\") pod \"ceilometer-0\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") " pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.363238 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99503901-5bf8-428b-b835-76ef4a876036" path="/var/lib/kubelet/pods/99503901-5bf8-428b-b835-76ef4a876036/volumes"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.369859 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.636766 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 21 17:38:35 crc kubenswrapper[4823]: I0121 17:38:35.913252 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:36 crc kubenswrapper[4823]: I0121 17:38:36.146321 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 21 17:38:36 crc kubenswrapper[4823]: I0121 17:38:36.648832 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 17:38:36 crc kubenswrapper[4823]: I0121 17:38:36.650077 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerStarted","Data":"c62c6a515dd69852cb860d50fb00d762452d42b8bb7586df354b450a8cabbae6"}
Jan 21 17:38:36 crc kubenswrapper[4823]: I0121 17:38:36.946478 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 21 17:38:37 crc kubenswrapper[4823]: I0121 17:38:37.663830 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerStarted","Data":"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"}
Jan 21 17:38:37 crc kubenswrapper[4823]: I0121 17:38:37.663919 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 17:38:37 crc kubenswrapper[4823]: I0121 17:38:37.770172 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 21 17:38:38 crc kubenswrapper[4823]: I0121 17:38:38.547410 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:39 crc kubenswrapper[4823]: I0121 17:38:39.932939 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 21 17:38:39 crc kubenswrapper[4823]: I0121 17:38:39.933199 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 21 17:38:39 crc kubenswrapper[4823]: I0121 17:38:39.967782 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 21 17:38:39 crc kubenswrapper[4823]: I0121 17:38:39.992760 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 21 17:38:40 crc kubenswrapper[4823]: I0121 17:38:40.703621 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 21 17:38:40 crc kubenswrapper[4823]: I0121 17:38:40.703955 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 21 17:38:42 crc kubenswrapper[4823]: I0121 17:38:42.958052 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 21 17:38:42 crc kubenswrapper[4823]: I0121 17:38:42.958492 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 17:38:42 crc kubenswrapper[4823]: I0121 17:38:42.959094 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 21 17:38:43 crc kubenswrapper[4823]: I0121 17:38:43.743921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qs8v9" event={"ID":"223be500-ec7c-4380-9001-8f80d0f799f2","Type":"ContainerStarted","Data":"802c97b99c54a1d566f9b886c15ff1806c668f1ee2cb9c2a22b1305cbf79980c"}
Jan 21 17:38:43 crc kubenswrapper[4823]: I0121 17:38:43.747169 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerStarted","Data":"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"}
Jan 21 17:38:43 crc kubenswrapper[4823]: I0121 17:38:43.769153 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qs8v9" podStartSLOduration=1.66961001 podStartE2EDuration="12.769126194s" podCreationTimestamp="2026-01-21 17:38:31 +0000 UTC" firstStartedPulling="2026-01-21 17:38:32.101265966 +0000 UTC m=+1313.027396826" lastFinishedPulling="2026-01-21 17:38:43.20078215 +0000 UTC m=+1324.126913010" observedRunningTime="2026-01-21 17:38:43.767387961 +0000 UTC m=+1324.693518831" watchObservedRunningTime="2026-01-21 17:38:43.769126194 +0000 UTC m=+1324.695257054"
Jan 21 17:38:44 crc kubenswrapper[4823]: I0121 17:38:44.761983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerStarted","Data":"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"}
Jan 21 17:38:45 crc kubenswrapper[4823]: I0121 17:38:45.071047 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:38:45 crc kubenswrapper[4823]: I0121 17:38:45.071145 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:38:47 crc kubenswrapper[4823]: I0121 17:38:47.795207 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerStarted","Data":"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"}
Jan 21 17:38:47 crc kubenswrapper[4823]: I0121 17:38:47.795820 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 17:38:47 crc kubenswrapper[4823]: I0121 17:38:47.795434 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="ceilometer-notification-agent" containerID="cri-o://042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58" gracePeriod=30
Jan 21 17:38:47 crc kubenswrapper[4823]: I0121 17:38:47.795384 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="ceilometer-central-agent" containerID="cri-o://99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03" gracePeriod=30
Jan 21 17:38:47 crc kubenswrapper[4823]: I0121 17:38:47.795458 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="proxy-httpd" containerID="cri-o://c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5" gracePeriod=30
Jan 21 17:38:47 crc kubenswrapper[4823]: I0121 17:38:47.795476 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="sg-core" containerID="cri-o://aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41" gracePeriod=30
Jan 21 17:38:47 crc kubenswrapper[4823]: I0121 17:38:47.824499 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.344235703 podStartE2EDuration="13.824477831s" podCreationTimestamp="2026-01-21 17:38:34 +0000 UTC" firstStartedPulling="2026-01-21 17:38:35.926732021 +0000 UTC m=+1316.852862881" lastFinishedPulling="2026-01-21 17:38:47.406974139 +0000 UTC m=+1328.333105009" observedRunningTime="2026-01-21 17:38:47.817293944 +0000 UTC m=+1328.743424804" watchObservedRunningTime="2026-01-21 17:38:47.824477831 +0000 UTC m=+1328.750608691"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.536216 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.634136 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-config-data\") pod \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") "
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.634208 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-log-httpd\") pod \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") "
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.634305 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-combined-ca-bundle\") pod \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") "
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.634327 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-run-httpd\") pod \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") "
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.634422 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-scripts\") pod \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") "
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.634539 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggxg9\" (UniqueName: \"kubernetes.io/projected/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-kube-api-access-ggxg9\") pod \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") "
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.634608 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-sg-core-conf-yaml\") pod \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\" (UID: \"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e\") "
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.634933 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" (UID: "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.634957 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" (UID: "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.635544 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.635564 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.639698 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-kube-api-access-ggxg9" (OuterVolumeSpecName: "kube-api-access-ggxg9") pod "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" (UID: "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e"). InnerVolumeSpecName "kube-api-access-ggxg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.640279 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-scripts" (OuterVolumeSpecName: "scripts") pod "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" (UID: "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.667204 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" (UID: "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.736685 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" (UID: "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.737188 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggxg9\" (UniqueName: \"kubernetes.io/projected/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-kube-api-access-ggxg9\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.737219 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.737231 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.737242 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.765983 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-config-data" (OuterVolumeSpecName: "config-data") pod "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" (UID: "8560d5dd-20f2-49c0-8a9e-34e6aef87a5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.808938 4823 generic.go:334] "Generic (PLEG): container finished" podID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerID="c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5" exitCode=0
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.808965 4823 generic.go:334] "Generic (PLEG): container finished" podID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerID="aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41" exitCode=2
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.808974 4823 generic.go:334] "Generic (PLEG): container finished" podID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerID="042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58" exitCode=0
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.808984 4823 generic.go:334] "Generic (PLEG): container finished" podID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerID="99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03" exitCode=0
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.808987 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerDied","Data":"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"}
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.809081 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerDied","Data":"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"}
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.809096 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerDied","Data":"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"}
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.809109 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerDied","Data":"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"}
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.809122 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8560d5dd-20f2-49c0-8a9e-34e6aef87a5e","Type":"ContainerDied","Data":"c62c6a515dd69852cb860d50fb00d762452d42b8bb7586df354b450a8cabbae6"}
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.809148 4823 scope.go:117] "RemoveContainer" containerID="c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.810325 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.829407 4823 scope.go:117] "RemoveContainer" containerID="aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.842929 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.857203 4823 scope.go:117] "RemoveContainer" containerID="042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.858492 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.870793 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.879516 4823 scope.go:117] "RemoveContainer" containerID="99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.884829 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:48 crc kubenswrapper[4823]: E0121 17:38:48.885389 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="ceilometer-central-agent"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.885542 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="ceilometer-central-agent"
Jan 21 17:38:48 crc kubenswrapper[4823]: E0121 17:38:48.885567 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="sg-core"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.885574 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="sg-core"
Jan 21 17:38:48 crc kubenswrapper[4823]: E0121 17:38:48.886071 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="ceilometer-notification-agent"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.886107 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="ceilometer-notification-agent"
Jan 21 17:38:48 crc kubenswrapper[4823]: E0121 17:38:48.886126 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="proxy-httpd"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.886134 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="proxy-httpd"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.886388 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="ceilometer-notification-agent"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.886420 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="ceilometer-central-agent"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.886434 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="sg-core"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.886450 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" containerName="proxy-httpd"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.892542 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.895568 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.895822 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.899020 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.908679 4823 scope.go:117] "RemoveContainer" containerID="c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"
Jan 21 17:38:48 crc kubenswrapper[4823]: E0121 17:38:48.911999 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5\": container with ID starting with c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5 not found: ID does not exist" containerID="c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.912046 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"} err="failed to get container status \"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5\": rpc error: code = NotFound desc = could not find container \"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5\": container with ID starting with c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.912076 4823 scope.go:117] "RemoveContainer" containerID="aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"
Jan 21 17:38:48 crc kubenswrapper[4823]: E0121 17:38:48.912476 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41\": container with ID starting with aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41 not found: ID does not exist" containerID="aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.912499 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"} err="failed to get container status \"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41\": rpc error: code = NotFound desc = could not find container \"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41\": container with ID starting with aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.912519 4823 scope.go:117] "RemoveContainer" containerID="042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"
Jan 21 17:38:48 crc kubenswrapper[4823]: E0121 17:38:48.912948 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58\": container with ID starting with 042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58 not found: ID does not exist" containerID="042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.912969 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"} err="failed to get container status \"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58\": rpc error: code = NotFound desc = could not find container \"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58\": container with ID starting with 042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.912985 4823 scope.go:117] "RemoveContainer" containerID="99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"
Jan 21 17:38:48 crc kubenswrapper[4823]: E0121 17:38:48.913587 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03\": container with ID starting with 99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03 not found: ID does not exist" containerID="99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.913630 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"} err="failed to get container status \"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03\": rpc error: code = NotFound desc = could not find container \"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03\": container with ID starting with 99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.913668 4823 scope.go:117] "RemoveContainer" containerID="c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.914041 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"} err="failed to get container status \"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5\": rpc error: code = NotFound desc = could not find container \"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5\": container with ID starting with c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.914075 4823 scope.go:117] "RemoveContainer" containerID="aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.914301 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"} err="failed to get container status \"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41\": rpc error: code = NotFound desc = could not find container \"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41\": container with ID starting with aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.914325 4823 scope.go:117] "RemoveContainer" containerID="042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.914539 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"} err="failed to get container status \"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58\": rpc error: code = NotFound desc = could not find container \"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58\": container with ID starting with 042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.914561 4823 scope.go:117] "RemoveContainer" containerID="99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.914796 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"} err="failed to get container status \"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03\": rpc error: code = NotFound desc = could not find container \"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03\": container with ID starting with 99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.914821 4823 scope.go:117] "RemoveContainer" containerID="c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.916114 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"} err="failed to get container status \"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5\": rpc error: code = NotFound desc = could not find container \"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5\": container with ID starting with c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.916141 4823 scope.go:117] "RemoveContainer" containerID="aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.916491 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"} err="failed to get container status \"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41\": rpc error: code = NotFound desc = could not find container \"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41\": container with ID starting with aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.916507 4823 scope.go:117] "RemoveContainer" containerID="042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.920069 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"} err="failed to get container status \"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58\": rpc error: code = NotFound desc = could not find container \"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58\": container with ID starting with 042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.920118 4823 scope.go:117] "RemoveContainer" containerID="99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.921707 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"} err="failed to get container status \"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03\": rpc error: code = NotFound desc = could not find container \"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03\": container with ID starting with 99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.921810 4823 scope.go:117] "RemoveContainer" containerID="c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.922843 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5"} err="failed to get container status \"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5\": rpc error: code = NotFound desc = could not find container \"c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5\": container with ID starting with c8229276d3c471112fab7abe5a7f864e739f3719a69cfbbb38a766b333280ec5 not found: ID does not exist"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.922904 4823 scope.go:117] "RemoveContainer" containerID="aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"
Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.923254 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41"} err="failed to get container status \"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41\": rpc error: code = NotFound desc = could not find
container \"aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41\": container with ID starting with aad55e83767d565ab75afb25cd7beadca1a7ae00de0d2409d11735eaceb70f41 not found: ID does not exist" Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.923306 4823 scope.go:117] "RemoveContainer" containerID="042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58" Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.926080 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58"} err="failed to get container status \"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58\": rpc error: code = NotFound desc = could not find container \"042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58\": container with ID starting with 042d232b53b73c118e00e31ba048fe8b4b5f4b8f1101c9dc52e76face3ba6f58 not found: ID does not exist" Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.926135 4823 scope.go:117] "RemoveContainer" containerID="99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03" Jan 21 17:38:48 crc kubenswrapper[4823]: I0121 17:38:48.928424 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03"} err="failed to get container status \"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03\": rpc error: code = NotFound desc = could not find container \"99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03\": container with ID starting with 99c168806f8e6d2b2b3ed83557bf0f8996fda89b35de75c5dce074aa61118f03 not found: ID does not exist" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.046301 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-log-httpd\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.046382 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.046433 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-scripts\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.046469 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-run-httpd\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.046498 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-config-data\") pod \"ceilometer-0\" (UID: 
\"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.046551 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47cr\" (UniqueName: \"kubernetes.io/projected/df473052-4fc3-4aa6-bb64-0af38b4b5a90-kube-api-access-f47cr\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.046611 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.148310 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-run-httpd\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.148357 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-config-data\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.148402 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47cr\" (UniqueName: \"kubernetes.io/projected/df473052-4fc3-4aa6-bb64-0af38b4b5a90-kube-api-access-f47cr\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.148447 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.148518 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-log-httpd\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.148556 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.148595 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-scripts\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.148732 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-run-httpd\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.149277 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-log-httpd\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.154172 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.154350 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.154418 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-config-data\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.154554 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-scripts\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.173300 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47cr\" (UniqueName: \"kubernetes.io/projected/df473052-4fc3-4aa6-bb64-0af38b4b5a90-kube-api-access-f47cr\") pod \"ceilometer-0\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.217257 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.359740 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8560d5dd-20f2-49c0-8a9e-34e6aef87a5e" path="/var/lib/kubelet/pods/8560d5dd-20f2-49c0-8a9e-34e6aef87a5e/volumes" Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.681256 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:38:49 crc kubenswrapper[4823]: W0121 17:38:49.681426 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf473052_4fc3_4aa6_bb64_0af38b4b5a90.slice/crio-1a584b0945b7acb3a1e5ed6cc7eed49df2d4ddd10a1e7fb94d937bd4b970b100 WatchSource:0}: Error finding container 1a584b0945b7acb3a1e5ed6cc7eed49df2d4ddd10a1e7fb94d937bd4b970b100: Status 404 returned error can't find the container with id 1a584b0945b7acb3a1e5ed6cc7eed49df2d4ddd10a1e7fb94d937bd4b970b100 Jan 21 17:38:49 crc kubenswrapper[4823]: I0121 17:38:49.849752 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerStarted","Data":"1a584b0945b7acb3a1e5ed6cc7eed49df2d4ddd10a1e7fb94d937bd4b970b100"} Jan 21 17:38:50 crc kubenswrapper[4823]: I0121 17:38:50.860882 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerStarted","Data":"fb591984babe13e1edb02133aaabe1ad1e1cfe44795d01f4750db83e912a3e9a"} Jan 21 17:38:51 crc kubenswrapper[4823]: I0121 17:38:51.872623 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerStarted","Data":"b8469449494a1db3eb710992352aec61bc08f87fc4f989dbc595a4a3850e6d72"} Jan 21 17:38:52 crc kubenswrapper[4823]: I0121 17:38:52.885741 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerStarted","Data":"92da10e890b7660c54a86612c87ca4fdc5f9a466a32bace8f5700dfc3c3f540b"} Jan 21 17:38:53 crc kubenswrapper[4823]: I0121 17:38:53.325066 4823 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod92e4d2b6-046c-42ff-afcc-2dda2abe61cc"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod92e4d2b6-046c-42ff-afcc-2dda2abe61cc] : Timed out while waiting for systemd to remove kubepods-besteffort-pod92e4d2b6_046c_42ff_afcc_2dda2abe61cc.slice" Jan 21 17:38:53 crc kubenswrapper[4823]: E0121 17:38:53.325127 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod92e4d2b6-046c-42ff-afcc-2dda2abe61cc] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod92e4d2b6-046c-42ff-afcc-2dda2abe61cc] : Timed out while waiting for systemd to remove kubepods-besteffort-pod92e4d2b6_046c_42ff_afcc_2dda2abe61cc.slice" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" podUID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" Jan 21 17:38:53 crc kubenswrapper[4823]: I0121 17:38:53.899369 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-r9rjd" Jan 21 17:38:53 crc kubenswrapper[4823]: I0121 17:38:53.900773 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerStarted","Data":"04ece8353dc44d09b007b1edc35ceae74be13a61b6024285484ce7f2d7f2357c"} Jan 21 17:38:53 crc kubenswrapper[4823]: I0121 17:38:53.901092 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 17:38:53 crc kubenswrapper[4823]: I0121 17:38:53.920466 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.595914027 podStartE2EDuration="5.920446981s" podCreationTimestamp="2026-01-21 17:38:48 +0000 UTC" firstStartedPulling="2026-01-21 17:38:49.684439596 +0000 UTC m=+1330.610570456" lastFinishedPulling="2026-01-21 17:38:53.00897255 +0000 UTC m=+1333.935103410" observedRunningTime="2026-01-21 17:38:53.917316424 +0000 UTC m=+1334.843447294" watchObservedRunningTime="2026-01-21 17:38:53.920446981 +0000 UTC m=+1334.846577851" Jan 21 17:38:53 crc kubenswrapper[4823]: I0121 17:38:53.967255 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r9rjd"] Jan 21 17:38:53 crc kubenswrapper[4823]: I0121 17:38:53.977133 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r9rjd"] Jan 21 17:38:55 crc kubenswrapper[4823]: I0121 17:38:55.354529 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e4d2b6-046c-42ff-afcc-2dda2abe61cc" path="/var/lib/kubelet/pods/92e4d2b6-046c-42ff-afcc-2dda2abe61cc/volumes" Jan 21 17:38:58 crc kubenswrapper[4823]: I0121 17:38:58.944405 4823 generic.go:334] "Generic (PLEG): container finished" podID="223be500-ec7c-4380-9001-8f80d0f799f2" containerID="802c97b99c54a1d566f9b886c15ff1806c668f1ee2cb9c2a22b1305cbf79980c" exitCode=0 Jan 21 17:38:58 crc kubenswrapper[4823]: I0121 17:38:58.944510 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qs8v9" event={"ID":"223be500-ec7c-4380-9001-8f80d0f799f2","Type":"ContainerDied","Data":"802c97b99c54a1d566f9b886c15ff1806c668f1ee2cb9c2a22b1305cbf79980c"} Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.376319 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qs8v9" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.511065 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-config-data\") pod \"223be500-ec7c-4380-9001-8f80d0f799f2\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.511223 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b9bf\" (UniqueName: \"kubernetes.io/projected/223be500-ec7c-4380-9001-8f80d0f799f2-kube-api-access-9b9bf\") pod \"223be500-ec7c-4380-9001-8f80d0f799f2\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.511444 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-combined-ca-bundle\") pod \"223be500-ec7c-4380-9001-8f80d0f799f2\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.511538 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-scripts\") pod \"223be500-ec7c-4380-9001-8f80d0f799f2\" (UID: \"223be500-ec7c-4380-9001-8f80d0f799f2\") " Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.516987 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223be500-ec7c-4380-9001-8f80d0f799f2-kube-api-access-9b9bf" (OuterVolumeSpecName: "kube-api-access-9b9bf") pod "223be500-ec7c-4380-9001-8f80d0f799f2" (UID: "223be500-ec7c-4380-9001-8f80d0f799f2"). InnerVolumeSpecName "kube-api-access-9b9bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.517313 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-scripts" (OuterVolumeSpecName: "scripts") pod "223be500-ec7c-4380-9001-8f80d0f799f2" (UID: "223be500-ec7c-4380-9001-8f80d0f799f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.541029 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "223be500-ec7c-4380-9001-8f80d0f799f2" (UID: "223be500-ec7c-4380-9001-8f80d0f799f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.561053 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-config-data" (OuterVolumeSpecName: "config-data") pod "223be500-ec7c-4380-9001-8f80d0f799f2" (UID: "223be500-ec7c-4380-9001-8f80d0f799f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.614239 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.614282 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.614301 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b9bf\" (UniqueName: \"kubernetes.io/projected/223be500-ec7c-4380-9001-8f80d0f799f2-kube-api-access-9b9bf\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.614316 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223be500-ec7c-4380-9001-8f80d0f799f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.969771 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qs8v9" event={"ID":"223be500-ec7c-4380-9001-8f80d0f799f2","Type":"ContainerDied","Data":"6edb5c32e14273a50ec2b5790c0f6c570d1388f0f2cc2f8be4c9e6d2bf752e0f"} Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.970111 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6edb5c32e14273a50ec2b5790c0f6c570d1388f0f2cc2f8be4c9e6d2bf752e0f" Jan 21 17:39:00 crc kubenswrapper[4823]: I0121 17:39:00.969897 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qs8v9" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.078629 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 17:39:01 crc kubenswrapper[4823]: E0121 17:39:01.079254 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223be500-ec7c-4380-9001-8f80d0f799f2" containerName="nova-cell0-conductor-db-sync" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.079281 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="223be500-ec7c-4380-9001-8f80d0f799f2" containerName="nova-cell0-conductor-db-sync" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.079519 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="223be500-ec7c-4380-9001-8f80d0f799f2" containerName="nova-cell0-conductor-db-sync" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.080448 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.083436 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.086085 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ftrqd" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.092869 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.126012 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.126090 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.126185 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6lf2\" (UniqueName: \"kubernetes.io/projected/d3cfc561-33c5-4b82-aa71-ed02926fabaf-kube-api-access-h6lf2\") pod \"nova-cell0-conductor-0\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.227986 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.228141 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.228428 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6lf2\" (UniqueName: \"kubernetes.io/projected/d3cfc561-33c5-4b82-aa71-ed02926fabaf-kube-api-access-h6lf2\") pod \"nova-cell0-conductor-0\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.234892 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.235391 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.247145 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6lf2\" (UniqueName: \"kubernetes.io/projected/d3cfc561-33c5-4b82-aa71-ed02926fabaf-kube-api-access-h6lf2\") pod \"nova-cell0-conductor-0\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.408140 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.936721 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 17:39:01 crc kubenswrapper[4823]: I0121 17:39:01.983531 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d3cfc561-33c5-4b82-aa71-ed02926fabaf","Type":"ContainerStarted","Data":"27d4bb960d3c679655dcad8363ab776c60cb1d6dcabf990867103bc6556b4b59"} Jan 21 17:39:02 crc kubenswrapper[4823]: I0121 17:39:02.998146 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d3cfc561-33c5-4b82-aa71-ed02926fabaf","Type":"ContainerStarted","Data":"c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c"} Jan 21 17:39:02 crc kubenswrapper[4823]: I0121 17:39:02.999878 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:03 crc kubenswrapper[4823]: I0121 17:39:03.024344 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.024325903 podStartE2EDuration="2.024325903s" podCreationTimestamp="2026-01-21 17:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:03.018253583 +0000 UTC m=+1343.944384463" watchObservedRunningTime="2026-01-21 17:39:03.024325903 +0000 UTC m=+1343.950456763" Jan 21 17:39:05 crc kubenswrapper[4823]: I0121 17:39:05.301783 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 17:39:05 crc kubenswrapper[4823]: I0121 17:39:05.312950 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 17:39:05 crc kubenswrapper[4823]: I0121 17:39:05.313238 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="639e3107-061d-4225-9344-7f2e0a3099b8" containerName="watcher-decision-engine" containerID="cri-o://03aef47c3e5fb27c19f3f3276813a1825d29fa376ce9aaa7338ff130f2a69e61" gracePeriod=30 Jan 21 17:39:06 crc kubenswrapper[4823]: I0121 17:39:06.037756 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="d3cfc561-33c5-4b82-aa71-ed02926fabaf" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c" gracePeriod=30 Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.326248 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.326766 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="proxy-httpd" containerID="cri-o://04ece8353dc44d09b007b1edc35ceae74be13a61b6024285484ce7f2d7f2357c" gracePeriod=30 Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.326829 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="ceilometer-notification-agent" containerID="cri-o://b8469449494a1db3eb710992352aec61bc08f87fc4f989dbc595a4a3850e6d72" gracePeriod=30 Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.326814 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="sg-core" containerID="cri-o://92da10e890b7660c54a86612c87ca4fdc5f9a466a32bace8f5700dfc3c3f540b" gracePeriod=30 Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.327049 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="ceilometer-central-agent" containerID="cri-o://fb591984babe13e1edb02133aaabe1ad1e1cfe44795d01f4750db83e912a3e9a" gracePeriod=30 Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.351024 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": EOF" Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.796665 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.894527 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6lf2\" (UniqueName: \"kubernetes.io/projected/d3cfc561-33c5-4b82-aa71-ed02926fabaf-kube-api-access-h6lf2\") pod \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.894696 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-combined-ca-bundle\") pod \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.894732 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-config-data\") pod \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\" (UID: \"d3cfc561-33c5-4b82-aa71-ed02926fabaf\") " Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.903148 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3cfc561-33c5-4b82-aa71-ed02926fabaf-kube-api-access-h6lf2" (OuterVolumeSpecName: "kube-api-access-h6lf2") pod "d3cfc561-33c5-4b82-aa71-ed02926fabaf" (UID: "d3cfc561-33c5-4b82-aa71-ed02926fabaf"). InnerVolumeSpecName "kube-api-access-h6lf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.928102 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3cfc561-33c5-4b82-aa71-ed02926fabaf" (UID: "d3cfc561-33c5-4b82-aa71-ed02926fabaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.935100 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-config-data" (OuterVolumeSpecName: "config-data") pod "d3cfc561-33c5-4b82-aa71-ed02926fabaf" (UID: "d3cfc561-33c5-4b82-aa71-ed02926fabaf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.997407 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.997442 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cfc561-33c5-4b82-aa71-ed02926fabaf-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:07 crc kubenswrapper[4823]: I0121 17:39:07.997454 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6lf2\" (UniqueName: \"kubernetes.io/projected/d3cfc561-33c5-4b82-aa71-ed02926fabaf-kube-api-access-h6lf2\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.064211 4823 generic.go:334] "Generic (PLEG): container finished" podID="d3cfc561-33c5-4b82-aa71-ed02926fabaf" containerID="c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c" exitCode=0 Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.064289 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d3cfc561-33c5-4b82-aa71-ed02926fabaf","Type":"ContainerDied","Data":"c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c"} Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.064319 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d3cfc561-33c5-4b82-aa71-ed02926fabaf","Type":"ContainerDied","Data":"27d4bb960d3c679655dcad8363ab776c60cb1d6dcabf990867103bc6556b4b59"} Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.064337 4823 scope.go:117] "RemoveContainer" containerID="c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.064476 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.077152 4823 generic.go:334] "Generic (PLEG): container finished" podID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerID="04ece8353dc44d09b007b1edc35ceae74be13a61b6024285484ce7f2d7f2357c" exitCode=0 Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.077204 4823 generic.go:334] "Generic (PLEG): container finished" podID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerID="92da10e890b7660c54a86612c87ca4fdc5f9a466a32bace8f5700dfc3c3f540b" exitCode=2 Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.077214 4823 generic.go:334] "Generic (PLEG): container finished" podID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerID="b8469449494a1db3eb710992352aec61bc08f87fc4f989dbc595a4a3850e6d72" exitCode=0 Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.077234 4823 generic.go:334] "Generic (PLEG): container finished" podID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerID="fb591984babe13e1edb02133aaabe1ad1e1cfe44795d01f4750db83e912a3e9a" exitCode=0 Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.077262 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerDied","Data":"04ece8353dc44d09b007b1edc35ceae74be13a61b6024285484ce7f2d7f2357c"} Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.077295 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerDied","Data":"92da10e890b7660c54a86612c87ca4fdc5f9a466a32bace8f5700dfc3c3f540b"} Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.077310 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerDied","Data":"b8469449494a1db3eb710992352aec61bc08f87fc4f989dbc595a4a3850e6d72"} Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.077322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerDied","Data":"fb591984babe13e1edb02133aaabe1ad1e1cfe44795d01f4750db83e912a3e9a"} Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.100169 4823 scope.go:117] "RemoveContainer" containerID="c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c" Jan 21 17:39:08 crc kubenswrapper[4823]: E0121 17:39:08.100721 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c\": container with ID starting with c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c not found: ID does not exist" containerID="c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.100759 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c"} err="failed to get container status \"c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c\": rpc error: code = NotFound desc = could not find container \"c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c\": container with ID starting with c8882812fca1b3111868e99e2ad3a3c0e09171952d5b6a5bde4b1c498824b15c not found: ID does not exist" Jan 21 17:39:08 crc kubenswrapper[4823]: 
I0121 17:39:08.111874 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.123733 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.140130 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 17:39:08 crc kubenswrapper[4823]: E0121 17:39:08.141114 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3cfc561-33c5-4b82-aa71-ed02926fabaf" containerName="nova-cell0-conductor-conductor" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.141216 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3cfc561-33c5-4b82-aa71-ed02926fabaf" containerName="nova-cell0-conductor-conductor" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.141507 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3cfc561-33c5-4b82-aa71-ed02926fabaf" containerName="nova-cell0-conductor-conductor" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.154038 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.157171 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ftrqd" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.157420 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.158768 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.305250 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pfzc\" (UniqueName: \"kubernetes.io/projected/ed613878-64dc-4f97-a498-8ef220d2d17e-kube-api-access-5pfzc\") pod \"nova-cell0-conductor-0\" (UID: \"ed613878-64dc-4f97-a498-8ef220d2d17e\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.305373 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed613878-64dc-4f97-a498-8ef220d2d17e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ed613878-64dc-4f97-a498-8ef220d2d17e\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.305422 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed613878-64dc-4f97-a498-8ef220d2d17e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ed613878-64dc-4f97-a498-8ef220d2d17e\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.405522 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.407896 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed613878-64dc-4f97-a498-8ef220d2d17e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ed613878-64dc-4f97-a498-8ef220d2d17e\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.408108 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pfzc\" (UniqueName: \"kubernetes.io/projected/ed613878-64dc-4f97-a498-8ef220d2d17e-kube-api-access-5pfzc\") pod \"nova-cell0-conductor-0\" (UID: \"ed613878-64dc-4f97-a498-8ef220d2d17e\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.408201 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed613878-64dc-4f97-a498-8ef220d2d17e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ed613878-64dc-4f97-a498-8ef220d2d17e\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.413476 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed613878-64dc-4f97-a498-8ef220d2d17e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ed613878-64dc-4f97-a498-8ef220d2d17e\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.414799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed613878-64dc-4f97-a498-8ef220d2d17e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ed613878-64dc-4f97-a498-8ef220d2d17e\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.440223 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pfzc\" (UniqueName: \"kubernetes.io/projected/ed613878-64dc-4f97-a498-8ef220d2d17e-kube-api-access-5pfzc\") pod \"nova-cell0-conductor-0\" (UID: \"ed613878-64dc-4f97-a498-8ef220d2d17e\") " pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.471610 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.513228 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-scripts\") pod \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.513334 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-run-httpd\") pod \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.513366 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-sg-core-conf-yaml\") pod \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.513419 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f47cr\" (UniqueName: \"kubernetes.io/projected/df473052-4fc3-4aa6-bb64-0af38b4b5a90-kube-api-access-f47cr\") pod \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.513580 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-config-data\") pod \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.513662 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-log-httpd\") pod \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.513706 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-combined-ca-bundle\") pod \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\" (UID: \"df473052-4fc3-4aa6-bb64-0af38b4b5a90\") " Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.513933 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "df473052-4fc3-4aa6-bb64-0af38b4b5a90" (UID: "df473052-4fc3-4aa6-bb64-0af38b4b5a90"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.514980 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.515455 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "df473052-4fc3-4aa6-bb64-0af38b4b5a90" (UID: "df473052-4fc3-4aa6-bb64-0af38b4b5a90"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.518377 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-scripts" (OuterVolumeSpecName: "scripts") pod "df473052-4fc3-4aa6-bb64-0af38b4b5a90" (UID: "df473052-4fc3-4aa6-bb64-0af38b4b5a90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.533573 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df473052-4fc3-4aa6-bb64-0af38b4b5a90-kube-api-access-f47cr" (OuterVolumeSpecName: "kube-api-access-f47cr") pod "df473052-4fc3-4aa6-bb64-0af38b4b5a90" (UID: "df473052-4fc3-4aa6-bb64-0af38b4b5a90"). InnerVolumeSpecName "kube-api-access-f47cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.543821 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "df473052-4fc3-4aa6-bb64-0af38b4b5a90" (UID: "df473052-4fc3-4aa6-bb64-0af38b4b5a90"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.605793 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df473052-4fc3-4aa6-bb64-0af38b4b5a90" (UID: "df473052-4fc3-4aa6-bb64-0af38b4b5a90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.621359 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.621397 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.621406 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.621416 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f47cr\" (UniqueName: \"kubernetes.io/projected/df473052-4fc3-4aa6-bb64-0af38b4b5a90-kube-api-access-f47cr\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.621427 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df473052-4fc3-4aa6-bb64-0af38b4b5a90-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.691407 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-config-data" (OuterVolumeSpecName: "config-data") pod "df473052-4fc3-4aa6-bb64-0af38b4b5a90" (UID: "df473052-4fc3-4aa6-bb64-0af38b4b5a90"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:08 crc kubenswrapper[4823]: I0121 17:39:08.723433 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df473052-4fc3-4aa6-bb64-0af38b4b5a90-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.054905 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 17:39:09 crc kubenswrapper[4823]: W0121 17:39:09.061668 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded613878_64dc_4f97_a498_8ef220d2d17e.slice/crio-4af7f2aa5fe8ee7fa8244e4bdfd63cd82f5bd9517d7de7a658a2c075b708e83e WatchSource:0}: Error finding container 4af7f2aa5fe8ee7fa8244e4bdfd63cd82f5bd9517d7de7a658a2c075b708e83e: Status 404 returned error can't find the container with id 4af7f2aa5fe8ee7fa8244e4bdfd63cd82f5bd9517d7de7a658a2c075b708e83e Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.089829 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ed613878-64dc-4f97-a498-8ef220d2d17e","Type":"ContainerStarted","Data":"4af7f2aa5fe8ee7fa8244e4bdfd63cd82f5bd9517d7de7a658a2c075b708e83e"} Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.096303 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df473052-4fc3-4aa6-bb64-0af38b4b5a90","Type":"ContainerDied","Data":"1a584b0945b7acb3a1e5ed6cc7eed49df2d4ddd10a1e7fb94d937bd4b970b100"} Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.096370 4823 scope.go:117] "RemoveContainer" containerID="04ece8353dc44d09b007b1edc35ceae74be13a61b6024285484ce7f2d7f2357c" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.096435 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.122493 4823 scope.go:117] "RemoveContainer" containerID="92da10e890b7660c54a86612c87ca4fdc5f9a466a32bace8f5700dfc3c3f540b" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.147481 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.162600 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.178210 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:09 crc kubenswrapper[4823]: E0121 17:39:09.179017 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="ceilometer-notification-agent" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.179037 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="ceilometer-notification-agent" Jan 21 17:39:09 crc kubenswrapper[4823]: E0121 17:39:09.179056 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="sg-core" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.179062 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="sg-core" Jan 21 17:39:09 crc kubenswrapper[4823]: E0121 17:39:09.179090 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="ceilometer-central-agent" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.179097 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="ceilometer-central-agent" Jan 21 17:39:09 crc kubenswrapper[4823]: E0121 17:39:09.179116 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="proxy-httpd" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.179122 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="proxy-httpd" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.179955 4823 scope.go:117] "RemoveContainer" containerID="b8469449494a1db3eb710992352aec61bc08f87fc4f989dbc595a4a3850e6d72" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.180167 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="ceilometer-central-agent" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.180190 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="ceilometer-notification-agent" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.180201 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="proxy-httpd" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.180220 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" containerName="sg-core" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.183957 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.186696 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.187047 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.187624 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.218695 4823 scope.go:117] "RemoveContainer" containerID="fb591984babe13e1edb02133aaabe1ad1e1cfe44795d01f4750db83e912a3e9a" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.351268 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-scripts\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.351349 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-config-data\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.351392 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-run-httpd\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.351431 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.351489 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-log-httpd\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.351565 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.351620 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9r2p\" (UniqueName: \"kubernetes.io/projected/750bc099-66ce-41b7-9d5b-5e3872154ff8-kube-api-access-d9r2p\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.383320 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3cfc561-33c5-4b82-aa71-ed02926fabaf" 
path="/var/lib/kubelet/pods/d3cfc561-33c5-4b82-aa71-ed02926fabaf/volumes" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.383930 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df473052-4fc3-4aa6-bb64-0af38b4b5a90" path="/var/lib/kubelet/pods/df473052-4fc3-4aa6-bb64-0af38b4b5a90/volumes" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.453303 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.453390 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9r2p\" (UniqueName: \"kubernetes.io/projected/750bc099-66ce-41b7-9d5b-5e3872154ff8-kube-api-access-d9r2p\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.453459 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-scripts\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.453507 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-config-data\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.453545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-run-httpd\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.453586 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.453641 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-log-httpd\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.454130 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-run-httpd\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.454224 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-log-httpd\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.466815 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.467765 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.478933 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-scripts\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.482993 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-config-data\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.489107 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9r2p\" (UniqueName: \"kubernetes.io/projected/750bc099-66ce-41b7-9d5b-5e3872154ff8-kube-api-access-d9r2p\") pod \"ceilometer-0\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.523444 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:39:09 crc kubenswrapper[4823]: I0121 17:39:09.970164 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:09 crc kubenswrapper[4823]: W0121 17:39:09.988558 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod750bc099_66ce_41b7_9d5b_5e3872154ff8.slice/crio-f12bb6305eff55b0554d303ec36801e95d7a0d62c9e678fbe7c9670fb10bb4e2 WatchSource:0}: Error finding container f12bb6305eff55b0554d303ec36801e95d7a0d62c9e678fbe7c9670fb10bb4e2: Status 404 returned error can't find the container with id f12bb6305eff55b0554d303ec36801e95d7a0d62c9e678fbe7c9670fb10bb4e2 Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.115373 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ed613878-64dc-4f97-a498-8ef220d2d17e","Type":"ContainerStarted","Data":"ef2c3c69f241c7e41c23b4427b54733a8cb393abef3f8e9f459b00a2122f5863"} Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.115510 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.124929 4823 generic.go:334] "Generic (PLEG): container finished" podID="639e3107-061d-4225-9344-7f2e0a3099b8" containerID="03aef47c3e5fb27c19f3f3276813a1825d29fa376ce9aaa7338ff130f2a69e61" exitCode=0 Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.124960 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"639e3107-061d-4225-9344-7f2e0a3099b8","Type":"ContainerDied","Data":"03aef47c3e5fb27c19f3f3276813a1825d29fa376ce9aaa7338ff130f2a69e61"} Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.127046 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerStarted","Data":"f12bb6305eff55b0554d303ec36801e95d7a0d62c9e678fbe7c9670fb10bb4e2"} Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.150323 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.150300428 podStartE2EDuration="2.150300428s" podCreationTimestamp="2026-01-21 17:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:10.136715783 +0000 UTC m=+1351.062846663" watchObservedRunningTime="2026-01-21 17:39:10.150300428 +0000 UTC m=+1351.076431298" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.270083 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.275194 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-custom-prometheus-ca\") pod \"639e3107-061d-4225-9344-7f2e0a3099b8\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.275272 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-combined-ca-bundle\") pod \"639e3107-061d-4225-9344-7f2e0a3099b8\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.316097 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "639e3107-061d-4225-9344-7f2e0a3099b8" (UID: "639e3107-061d-4225-9344-7f2e0a3099b8"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.331387 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "639e3107-061d-4225-9344-7f2e0a3099b8" (UID: "639e3107-061d-4225-9344-7f2e0a3099b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.384050 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-config-data\") pod \"639e3107-061d-4225-9344-7f2e0a3099b8\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.384183 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639e3107-061d-4225-9344-7f2e0a3099b8-logs\") pod \"639e3107-061d-4225-9344-7f2e0a3099b8\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.384228 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgzb5\" (UniqueName: \"kubernetes.io/projected/639e3107-061d-4225-9344-7f2e0a3099b8-kube-api-access-xgzb5\") pod \"639e3107-061d-4225-9344-7f2e0a3099b8\" (UID: \"639e3107-061d-4225-9344-7f2e0a3099b8\") " Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.384692 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/639e3107-061d-4225-9344-7f2e0a3099b8-logs" (OuterVolumeSpecName: "logs") pod "639e3107-061d-4225-9344-7f2e0a3099b8" (UID: "639e3107-061d-4225-9344-7f2e0a3099b8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.385147 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639e3107-061d-4225-9344-7f2e0a3099b8-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.385175 4823 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.385192 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.388046 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639e3107-061d-4225-9344-7f2e0a3099b8-kube-api-access-xgzb5" (OuterVolumeSpecName: "kube-api-access-xgzb5") pod "639e3107-061d-4225-9344-7f2e0a3099b8" (UID: "639e3107-061d-4225-9344-7f2e0a3099b8"). InnerVolumeSpecName "kube-api-access-xgzb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.459970 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-config-data" (OuterVolumeSpecName: "config-data") pod "639e3107-061d-4225-9344-7f2e0a3099b8" (UID: "639e3107-061d-4225-9344-7f2e0a3099b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.487275 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639e3107-061d-4225-9344-7f2e0a3099b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:10 crc kubenswrapper[4823]: I0121 17:39:10.487325 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgzb5\" (UniqueName: \"kubernetes.io/projected/639e3107-061d-4225-9344-7f2e0a3099b8-kube-api-access-xgzb5\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.141090 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerStarted","Data":"cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89"} Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.143372 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.144148 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"639e3107-061d-4225-9344-7f2e0a3099b8","Type":"ContainerDied","Data":"4c6006200738b4996e2ea020c68ac2549f8ac578d9f0900a278db113e5de9a22"} Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.144204 4823 scope.go:117] "RemoveContainer" containerID="03aef47c3e5fb27c19f3f3276813a1825d29fa376ce9aaa7338ff130f2a69e61" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.217355 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.247065 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.287062 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 17:39:11 crc kubenswrapper[4823]: E0121 17:39:11.287694 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639e3107-061d-4225-9344-7f2e0a3099b8" containerName="watcher-decision-engine" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.287883 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="639e3107-061d-4225-9344-7f2e0a3099b8" containerName="watcher-decision-engine" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.288161 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="639e3107-061d-4225-9344-7f2e0a3099b8" containerName="watcher-decision-engine" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.289103 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.293059 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.356107 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f25102d9-f15b-4887-82a3-7380b9d3d062-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.356223 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25102d9-f15b-4887-82a3-7380b9d3d062-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.356329 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5x88\" (UniqueName: \"kubernetes.io/projected/f25102d9-f15b-4887-82a3-7380b9d3d062-kube-api-access-c5x88\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.356453 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25102d9-f15b-4887-82a3-7380b9d3d062-combined-ca-bundle\") pod \"watcher-decision-engine-0\" 
(UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.356489 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25102d9-f15b-4887-82a3-7380b9d3d062-logs\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.402174 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639e3107-061d-4225-9344-7f2e0a3099b8" path="/var/lib/kubelet/pods/639e3107-061d-4225-9344-7f2e0a3099b8/volumes" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.402713 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.459800 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25102d9-f15b-4887-82a3-7380b9d3d062-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.459984 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5x88\" (UniqueName: \"kubernetes.io/projected/f25102d9-f15b-4887-82a3-7380b9d3d062-kube-api-access-c5x88\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.460098 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25102d9-f15b-4887-82a3-7380b9d3d062-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.460131 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25102d9-f15b-4887-82a3-7380b9d3d062-logs\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.460317 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f25102d9-f15b-4887-82a3-7380b9d3d062-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.461567 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25102d9-f15b-4887-82a3-7380b9d3d062-logs\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.465667 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25102d9-f15b-4887-82a3-7380b9d3d062-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc 
kubenswrapper[4823]: I0121 17:39:11.469946 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25102d9-f15b-4887-82a3-7380b9d3d062-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.470395 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f25102d9-f15b-4887-82a3-7380b9d3d062-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.489505 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5x88\" (UniqueName: \"kubernetes.io/projected/f25102d9-f15b-4887-82a3-7380b9d3d062-kube-api-access-c5x88\") pod \"watcher-decision-engine-0\" (UID: \"f25102d9-f15b-4887-82a3-7380b9d3d062\") " pod="openstack/watcher-decision-engine-0" Jan 21 17:39:11 crc kubenswrapper[4823]: I0121 17:39:11.695955 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 17:39:12 crc kubenswrapper[4823]: I0121 17:39:12.159320 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerStarted","Data":"00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d"} Jan 21 17:39:12 crc kubenswrapper[4823]: I0121 17:39:12.159954 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerStarted","Data":"5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983"} Jan 21 17:39:12 crc kubenswrapper[4823]: W0121 17:39:12.278551 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf25102d9_f15b_4887_82a3_7380b9d3d062.slice/crio-b762e7a8d3e6af7e4652e9e13b0e4b05892269404f6230b2450e7dd8648430de WatchSource:0}: Error finding container b762e7a8d3e6af7e4652e9e13b0e4b05892269404f6230b2450e7dd8648430de: Status 404 returned error can't find the container with id b762e7a8d3e6af7e4652e9e13b0e4b05892269404f6230b2450e7dd8648430de Jan 21 17:39:12 crc kubenswrapper[4823]: I0121 17:39:12.279834 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 17:39:13 crc kubenswrapper[4823]: I0121 17:39:13.172438 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f25102d9-f15b-4887-82a3-7380b9d3d062","Type":"ContainerStarted","Data":"6f9622e7558fd2509ffd989e993d281479cec6d709245d2c9229b830f640c750"} Jan 21 17:39:13 crc kubenswrapper[4823]: I0121 17:39:13.172778 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f25102d9-f15b-4887-82a3-7380b9d3d062","Type":"ContainerStarted","Data":"b762e7a8d3e6af7e4652e9e13b0e4b05892269404f6230b2450e7dd8648430de"} Jan 21 17:39:13 crc kubenswrapper[4823]: I0121 17:39:13.203675 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.2036547300000002 podStartE2EDuration="2.20365473s" podCreationTimestamp="2026-01-21 17:39:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:13.193334426 +0000 UTC m=+1354.119465286" watchObservedRunningTime="2026-01-21 17:39:13.20365473 +0000 UTC m=+1354.129785600" Jan 21 17:39:14 crc kubenswrapper[4823]: I0121 17:39:14.184316 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerStarted","Data":"c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87"} Jan 21 17:39:14 crc kubenswrapper[4823]: I0121 17:39:14.184678 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 17:39:14 crc kubenswrapper[4823]: I0121 17:39:14.210280 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9154100170000001 podStartE2EDuration="5.210258218s" podCreationTimestamp="2026-01-21 17:39:09 +0000 UTC" firstStartedPulling="2026-01-21 17:39:09.991823358 +0000 UTC m=+1350.917954218" lastFinishedPulling="2026-01-21 17:39:13.286671569 +0000 UTC m=+1354.212802419" observedRunningTime="2026-01-21 17:39:14.203972523 +0000 UTC m=+1355.130103393" watchObservedRunningTime="2026-01-21 17:39:14.210258218 +0000 UTC m=+1355.136389078" Jan 21 17:39:15 crc kubenswrapper[4823]: I0121 17:39:15.071029 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:39:15 crc kubenswrapper[4823]: I0121 17:39:15.071374 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:39:15 crc kubenswrapper[4823]: I0121 17:39:15.071442 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:39:15 crc kubenswrapper[4823]: I0121 17:39:15.072464 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5cf874111542cda3e34240991e3ef5c73b1f1132ce5389832d5612a1548617a"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:39:15 crc kubenswrapper[4823]: I0121 17:39:15.072552 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://e5cf874111542cda3e34240991e3ef5c73b1f1132ce5389832d5612a1548617a" gracePeriod=600 Jan 21 17:39:16 crc kubenswrapper[4823]: I0121 17:39:16.244004 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="e5cf874111542cda3e34240991e3ef5c73b1f1132ce5389832d5612a1548617a" exitCode=0 Jan 21 17:39:16 crc kubenswrapper[4823]: I0121 17:39:16.244076 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"e5cf874111542cda3e34240991e3ef5c73b1f1132ce5389832d5612a1548617a"} Jan 21 17:39:16 crc kubenswrapper[4823]: I0121 17:39:16.244592 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb"} Jan 21 17:39:16 crc kubenswrapper[4823]: I0121 17:39:16.244620 4823 scope.go:117] "RemoveContainer" containerID="bf25751a26ff3c64f8ae67c52c13c550034b8f3dcc6b86f0b444f95206ccf684" Jan 21 17:39:18 crc kubenswrapper[4823]: I0121 17:39:18.505616 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.220256 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qdxxs"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.222225 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.226579 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.227272 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.242998 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qdxxs"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.325363 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-scripts\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.325654 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj955\" (UniqueName: \"kubernetes.io/projected/2d463e08-e2af-4555-825a-b3913bf13d03-kube-api-access-wj955\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.325704 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-config-data\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.325747 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.427286 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 
17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.429154 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.429848 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj955\" (UniqueName: \"kubernetes.io/projected/2d463e08-e2af-4555-825a-b3913bf13d03-kube-api-access-wj955\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.429991 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-config-data\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.430039 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.430159 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-scripts\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.433207 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.439386 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-config-data\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.439748 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.443785 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.443968 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-scripts\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.508630 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj955\" (UniqueName: \"kubernetes.io/projected/2d463e08-e2af-4555-825a-b3913bf13d03-kube-api-access-wj955\") pod \"nova-cell0-cell-mapping-qdxxs\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") " 
pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.531880 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24t55\" (UniqueName: \"kubernetes.io/projected/d08918db-40e1-4228-a271-0f07ad9ebf45-kube-api-access-24t55\") pod \"nova-scheduler-0\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") " pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.531925 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") " pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.532004 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-config-data\") pod \"nova-scheduler-0\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") " pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.541321 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.543885 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.549297 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.562342 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qdxxs" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.570336 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.607151 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.609658 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.611910 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.644680 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnn7n\" (UniqueName: \"kubernetes.io/projected/b24558b7-c596-47e8-8902-888664d7a7b0-kube-api-access-fnn7n\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.644749 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-config-data\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.644814 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-config-data\") pod \"nova-scheduler-0\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") " pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.644900 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24558b7-c596-47e8-8902-888664d7a7b0-logs\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.645052 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24t55\" (UniqueName: \"kubernetes.io/projected/d08918db-40e1-4228-a271-0f07ad9ebf45-kube-api-access-24t55\") pod \"nova-scheduler-0\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") " pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.645082 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") " pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.645131 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.656465 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") " pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.674840 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-config-data\") pod \"nova-scheduler-0\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") " pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc 
kubenswrapper[4823]: I0121 17:39:19.702944 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24t55\" (UniqueName: \"kubernetes.io/projected/d08918db-40e1-4228-a271-0f07ad9ebf45-kube-api-access-24t55\") pod \"nova-scheduler-0\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") " pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.709056 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.735508 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.747719 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8zj\" (UniqueName: \"kubernetes.io/projected/3c8d29e2-8285-4057-a337-cbf285bf789c-kube-api-access-9l8zj\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.747776 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8d29e2-8285-4057-a337-cbf285bf789c-logs\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.747817 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.747884 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24558b7-c596-47e8-8902-888664d7a7b0-logs\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.747927 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-config-data\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.748021 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.748052 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnn7n\" (UniqueName: \"kubernetes.io/projected/b24558b7-c596-47e8-8902-888664d7a7b0-kube-api-access-fnn7n\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.748075 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-config-data\") pod \"nova-api-0\" (UID: 
\"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.753017 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-config-data\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.754742 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24558b7-c596-47e8-8902-888664d7a7b0-logs\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.756194 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.774831 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.781550 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.785072 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.795262 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnn7n\" (UniqueName: \"kubernetes.io/projected/b24558b7-c596-47e8-8902-888664d7a7b0-kube-api-access-fnn7n\") pod \"nova-api-0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") " pod="openstack/nova-api-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.829875 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.845785 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-nj8rj"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.848580 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.850240 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l8zj\" (UniqueName: \"kubernetes.io/projected/3c8d29e2-8285-4057-a337-cbf285bf789c-kube-api-access-9l8zj\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.850303 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8d29e2-8285-4057-a337-cbf285bf789c-logs\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.850326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.850353 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl8fb\" (UniqueName: \"kubernetes.io/projected/83ca2470-f750-4500-9e1c-0e96383fd1ca-kube-api-access-rl8fb\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.850390 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.850411 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-config-data\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.850452 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.852689 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8d29e2-8285-4057-a337-cbf285bf789c-logs\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.858722 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.860557 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-config-data\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.864033 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-nj8rj"] Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.879903 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l8zj\" (UniqueName: \"kubernetes.io/projected/3c8d29e2-8285-4057-a337-cbf285bf789c-kube-api-access-9l8zj\") pod \"nova-metadata-0\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " pod="openstack/nova-metadata-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.952966 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.953076 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8zl6\" (UniqueName: \"kubernetes.io/projected/e71ed82f-a626-4fad-b864-da1b1ff313b9-kube-api-access-c8zl6\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.953134 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.953169 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl8fb\" (UniqueName: \"kubernetes.io/projected/83ca2470-f750-4500-9e1c-0e96383fd1ca-kube-api-access-rl8fb\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.953282 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.953331 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-svc\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.953435 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:19 crc 
kubenswrapper[4823]: I0121 17:39:19.953611 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-config\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.954145 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.994287 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl8fb\" (UniqueName: \"kubernetes.io/projected/83ca2470-f750-4500-9e1c-0e96383fd1ca-kube-api-access-rl8fb\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.994500 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:19 crc kubenswrapper[4823]: I0121 17:39:19.994529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.041725 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.053630 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
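
The reconciler entries above show the fixed three-stage ladder every pod volume climbs in this log: "operationExecutor.VerifyControllerAttachedVolume started", then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded", once per (pod, volume) pair. A minimal Go sketch for auditing a saved copy of this journal follows; the stage strings are copied from the lines above, while the scanner setup, regexes, and output format are illustrative assumptions, not kubelet code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Stage markers copied verbatim from the kubelet lines above; a later
// marker overwrites an earlier one for the same (pod, volume) pair.
var stages = []string{
	"operationExecutor.VerifyControllerAttachedVolume started",
	"operationExecutor.MountVolume started",
	"MountVolume.SetUp succeeded",
}

var (
	volRe = regexp.MustCompile(`for volume \\"([^\\]+)\\"`) // e.g. \"config-data\"
	podRe = regexp.MustCompile(`pod="([^"]+)"`)             // e.g. pod="openstack/nova-api-0"
)

func main() {
	last := map[string]string{} // "pod volume" -> furthest stage reached
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines here are very long
	for sc.Scan() {
		line := sc.Text()
		for _, stage := range stages {
			if !strings.Contains(line, stage) {
				continue
			}
			vol := volRe.FindStringSubmatch(line)
			pod := podRe.FindStringSubmatch(line)
			if vol != nil && pod != nil {
				last[pod[1]+" "+vol[1]] = stage
			}
		}
	}
	for key, stage := range last {
		fmt.Printf("%-75s %s\n", key, stage)
	}
}

Fed something like the output of journalctl -u kubelet on stdin, it prints the furthest stage each volume reached, so a volume stuck before "MountVolume.SetUp succeeded" stands out immediately.
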
Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.057749 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-svc\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.058103 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-config\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.058436 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.058522 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.058578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8zl6\" (UniqueName: \"kubernetes.io/projected/e71ed82f-a626-4fad-b864-da1b1ff313b9-kube-api-access-c8zl6\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.058605 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.069032 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.069398 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.069832 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc
kubenswrapper[4823]: I0121 17:39:20.069915 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-config\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.074029 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-svc\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.105636 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8zl6\" (UniqueName: \"kubernetes.io/projected/e71ed82f-a626-4fad-b864-da1b1ff313b9-kube-api-access-c8zl6\") pod \"dnsmasq-dns-bccf8f775-nj8rj\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.165219 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.214839 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.404106 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c4mnt"] Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.419336 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.430733 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c4mnt"] Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.435839 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.436443 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.441745 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qdxxs"] Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.492306 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-scripts\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.492491 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-config-data\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.492581 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.492613 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt2zb\" (UniqueName: \"kubernetes.io/projected/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-kube-api-access-tt2zb\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.595176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-config-data\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.595600 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.595625 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt2zb\" (UniqueName: \"kubernetes.io/projected/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-kube-api-access-tt2zb\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.595740 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-scripts\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.603017 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.603154 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.604919 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-config-data\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.605335 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-scripts\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " 
pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.635278 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt2zb\" (UniqueName: \"kubernetes.io/projected/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-kube-api-access-tt2zb\") pod \"nova-cell1-conductor-db-sync-c4mnt\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") " pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.819696 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.845376 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c4mnt" Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.930602 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:39:20 crc kubenswrapper[4823]: I0121 17:39:20.950818 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 17:39:20 crc kubenswrapper[4823]: W0121 17:39:20.961409 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83ca2470_f750_4500_9e1c_0e96383fd1ca.slice/crio-0eb7eed396378d22810574f792a03797402700ba1c7b5c8e4d6fc4c15d8feadf WatchSource:0}: Error finding container 0eb7eed396378d22810574f792a03797402700ba1c7b5c8e4d6fc4c15d8feadf: Status 404 returned error can't find the container with id 0eb7eed396378d22810574f792a03797402700ba1c7b5c8e4d6fc4c15d8feadf Jan 21 17:39:20 crc kubenswrapper[4823]: W0121 17:39:20.966446 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c8d29e2_8285_4057_a337_cbf285bf789c.slice/crio-e861bc1a5cd0b0e188bdf7b1492de2901f67a1e37008c2483c3f4e4fed051da1 WatchSource:0}: Error finding container e861bc1a5cd0b0e188bdf7b1492de2901f67a1e37008c2483c3f4e4fed051da1: Status 404 returned error can't find the container with id e861bc1a5cd0b0e188bdf7b1492de2901f67a1e37008c2483c3f4e4fed051da1 Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.085588 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-nj8rj"] Jan 21 17:39:21 crc kubenswrapper[4823]: W0121 17:39:21.089472 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode71ed82f_a626_4fad_b864_da1b1ff313b9.slice/crio-bb3b0e3f4ab5e63579ff76dac8d1013ffe2d5dcba7498d502657c9176c152aea WatchSource:0}: Error finding container bb3b0e3f4ab5e63579ff76dac8d1013ffe2d5dcba7498d502657c9176c152aea: Status 404 returned error can't find the container with id bb3b0e3f4ab5e63579ff76dac8d1013ffe2d5dcba7498d502657c9176c152aea Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.395902 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c4mnt"] Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.465199 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83ca2470-f750-4500-9e1c-0e96383fd1ca","Type":"ContainerStarted","Data":"0eb7eed396378d22810574f792a03797402700ba1c7b5c8e4d6fc4c15d8feadf"} Jan 21 17:39:21 crc kubenswrapper[4823]: W0121 17:39:21.466755 4823 manager.go:1169] Failed to process watch event {EventType:0 
Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.477767 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" event={"ID":"e71ed82f-a626-4fad-b864-da1b1ff313b9","Type":"ContainerStarted","Data":"bb3b0e3f4ab5e63579ff76dac8d1013ffe2d5dcba7498d502657c9176c152aea"} Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.480111 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8d29e2-8285-4057-a337-cbf285bf789c","Type":"ContainerStarted","Data":"e861bc1a5cd0b0e188bdf7b1492de2901f67a1e37008c2483c3f4e4fed051da1"} Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.495151 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b24558b7-c596-47e8-8902-888664d7a7b0","Type":"ContainerStarted","Data":"93cf412b3bd56b00732bcc099aed10236c145dbdec7b6748b8b75679f963e8d5"} Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.519831 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qdxxs" event={"ID":"2d463e08-e2af-4555-825a-b3913bf13d03","Type":"ContainerStarted","Data":"47ffdff1507a2f04120a0a573cd9bb4aa164a79a34c73a08ab0db3de8dc8cbd0"} Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.519910 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qdxxs" event={"ID":"2d463e08-e2af-4555-825a-b3913bf13d03","Type":"ContainerStarted","Data":"27dfedaee221c73f3686d66b3b82dd2fde4f20aeb305ca2a5e4d0be6a6c7c86a"} Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.529975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d08918db-40e1-4228-a271-0f07ad9ebf45","Type":"ContainerStarted","Data":"b2bd23b3e9356f76b42d7c62ee72603dfdf831bbc6420c399b00aed13983a343"} Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.555499 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qdxxs" podStartSLOduration=2.5554798439999997 podStartE2EDuration="2.555479844s" podCreationTimestamp="2026-01-21 17:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:21.545168379 +0000 UTC m=+1362.471299239" watchObservedRunningTime="2026-01-21 17:39:21.555479844 +0000 UTC m=+1362.481610704" Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.702186 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 17:39:21 crc kubenswrapper[4823]: I0121 17:39:21.757354 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 17:39:22 crc kubenswrapper[4823]: E0121 17:39:22.056554 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures:
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode71ed82f_a626_4fad_b864_da1b1ff313b9.slice/crio-493ebb5d778f47a5ce5a0ba7f04ee97e663aec64f26568eb1e8cd1f661657852.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode71ed82f_a626_4fad_b864_da1b1ff313b9.slice/crio-conmon-493ebb5d778f47a5ce5a0ba7f04ee97e663aec64f26568eb1e8cd1f661657852.scope\": RecentStats: unable to find data in memory cache]" Jan 21 17:39:22 crc kubenswrapper[4823]: I0121 17:39:22.560432 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c4mnt" event={"ID":"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f","Type":"ContainerStarted","Data":"3e9311278f3e286f8c34db74363d66e17e79c70aeffe845de02c3060993331f8"} Jan 21 17:39:22 crc kubenswrapper[4823]: I0121 17:39:22.561238 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c4mnt" event={"ID":"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f","Type":"ContainerStarted","Data":"fef9244f51c89a9aeb3161e61d1b687e185073623880f16d5f60c2ca1db66169"} Jan 21 17:39:22 crc kubenswrapper[4823]: I0121 17:39:22.566091 4823 generic.go:334] "Generic (PLEG): container finished" podID="e71ed82f-a626-4fad-b864-da1b1ff313b9" containerID="493ebb5d778f47a5ce5a0ba7f04ee97e663aec64f26568eb1e8cd1f661657852" exitCode=0 Jan 21 17:39:22 crc kubenswrapper[4823]: I0121 17:39:22.567647 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" event={"ID":"e71ed82f-a626-4fad-b864-da1b1ff313b9","Type":"ContainerDied","Data":"493ebb5d778f47a5ce5a0ba7f04ee97e663aec64f26568eb1e8cd1f661657852"} Jan 21 17:39:22 crc kubenswrapper[4823]: I0121 17:39:22.567681 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 17:39:22 crc kubenswrapper[4823]: I0121 17:39:22.603844 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-c4mnt" podStartSLOduration=2.603816925 podStartE2EDuration="2.603816925s" podCreationTimestamp="2026-01-21 17:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:22.578601001 +0000 UTC m=+1363.504731861" watchObservedRunningTime="2026-01-21 17:39:22.603816925 +0000 UTC m=+1363.529947785" Jan 21 17:39:22 crc kubenswrapper[4823]: I0121 17:39:22.665636 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 17:39:23 crc kubenswrapper[4823]: I0121 17:39:23.028887 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:39:23 crc kubenswrapper[4823]: I0121 17:39:23.086971 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.604907 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8d29e2-8285-4057-a337-cbf285bf789c","Type":"ContainerStarted","Data":"5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623"} Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.606456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8d29e2-8285-4057-a337-cbf285bf789c","Type":"ContainerStarted","Data":"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575"} Jan 21 
17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.605076 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerName="nova-metadata-metadata" containerID="cri-o://5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623" gracePeriod=30 Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.605020 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerName="nova-metadata-log" containerID="cri-o://9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575" gracePeriod=30 Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.607001 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83ca2470-f750-4500-9e1c-0e96383fd1ca","Type":"ContainerStarted","Data":"eaefc5586c43d70f4e94b669b4a9b80639b39c77fb44e8ef98c8ea1c6e88fa77"} Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.607438 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="83ca2470-f750-4500-9e1c-0e96383fd1ca" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://eaefc5586c43d70f4e94b669b4a9b80639b39c77fb44e8ef98c8ea1c6e88fa77" gracePeriod=30 Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.610828 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" event={"ID":"e71ed82f-a626-4fad-b864-da1b1ff313b9","Type":"ContainerStarted","Data":"ff45895a391b5d30a8161fa7d5444301c9afd4a451346899b1ce655582fcc7f8"} Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.611084 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.614829 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b24558b7-c596-47e8-8902-888664d7a7b0","Type":"ContainerStarted","Data":"34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a"} Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.614891 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b24558b7-c596-47e8-8902-888664d7a7b0","Type":"ContainerStarted","Data":"5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2"} Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.617038 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d08918db-40e1-4228-a271-0f07ad9ebf45","Type":"ContainerStarted","Data":"2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a"} Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.640183 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.3623024 podStartE2EDuration="7.640162596s" podCreationTimestamp="2026-01-21 17:39:19 +0000 UTC" firstStartedPulling="2026-01-21 17:39:20.969152197 +0000 UTC m=+1361.895283057" lastFinishedPulling="2026-01-21 17:39:25.247012393 +0000 UTC m=+1366.173143253" observedRunningTime="2026-01-21 17:39:26.637168642 +0000 UTC m=+1367.563299502" watchObservedRunningTime="2026-01-21 17:39:26.640162596 +0000 UTC m=+1367.566293466" Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.664809 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.391625755 podStartE2EDuration="7.664778545s" podCreationTimestamp="2026-01-21 17:39:19 +0000 UTC" firstStartedPulling="2026-01-21 17:39:20.968527251 +0000 UTC m=+1361.894658111" lastFinishedPulling="2026-01-21 17:39:25.241680041 +0000 UTC m=+1366.167810901" observedRunningTime="2026-01-21 17:39:26.656795938 +0000 UTC m=+1367.582926798" watchObservedRunningTime="2026-01-21 17:39:26.664778545 +0000 UTC m=+1367.590909415" Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.678297 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.021080393 podStartE2EDuration="7.678274449s" podCreationTimestamp="2026-01-21 17:39:19 +0000 UTC" firstStartedPulling="2026-01-21 17:39:20.583230984 +0000 UTC m=+1361.509361844" lastFinishedPulling="2026-01-21 17:39:25.24042504 +0000 UTC m=+1366.166555900" observedRunningTime="2026-01-21 17:39:26.674601828 +0000 UTC m=+1367.600732688" watchObservedRunningTime="2026-01-21 17:39:26.678274449 +0000 UTC m=+1367.604405309" Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.696937 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.273445962 podStartE2EDuration="7.69691897s" podCreationTimestamp="2026-01-21 17:39:19 +0000 UTC" firstStartedPulling="2026-01-21 17:39:20.817463964 +0000 UTC m=+1361.743594824" lastFinishedPulling="2026-01-21 17:39:25.240936972 +0000 UTC m=+1366.167067832" observedRunningTime="2026-01-21 17:39:26.69649493 +0000 UTC m=+1367.622625800" watchObservedRunningTime="2026-01-21 17:39:26.69691897 +0000 UTC m=+1367.623049820" Jan 21 17:39:26 crc kubenswrapper[4823]: I0121 17:39:26.719503 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" podStartSLOduration=7.719485118 podStartE2EDuration="7.719485118s" podCreationTimestamp="2026-01-21 17:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:26.716468984 +0000 UTC m=+1367.642599844" watchObservedRunningTime="2026-01-21 17:39:26.719485118 +0000 UTC m=+1367.645615978" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.211753 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.281079 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l8zj\" (UniqueName: \"kubernetes.io/projected/3c8d29e2-8285-4057-a337-cbf285bf789c-kube-api-access-9l8zj\") pod \"3c8d29e2-8285-4057-a337-cbf285bf789c\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.281158 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-combined-ca-bundle\") pod \"3c8d29e2-8285-4057-a337-cbf285bf789c\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.281433 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8d29e2-8285-4057-a337-cbf285bf789c-logs\") pod \"3c8d29e2-8285-4057-a337-cbf285bf789c\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.281527 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-config-data\") pod \"3c8d29e2-8285-4057-a337-cbf285bf789c\" (UID: \"3c8d29e2-8285-4057-a337-cbf285bf789c\") " Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.283257 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8d29e2-8285-4057-a337-cbf285bf789c-logs" (OuterVolumeSpecName: "logs") pod "3c8d29e2-8285-4057-a337-cbf285bf789c" (UID: "3c8d29e2-8285-4057-a337-cbf285bf789c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.301457 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8d29e2-8285-4057-a337-cbf285bf789c-kube-api-access-9l8zj" (OuterVolumeSpecName: "kube-api-access-9l8zj") pod "3c8d29e2-8285-4057-a337-cbf285bf789c" (UID: "3c8d29e2-8285-4057-a337-cbf285bf789c"). InnerVolumeSpecName "kube-api-access-9l8zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.317056 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-config-data" (OuterVolumeSpecName: "config-data") pod "3c8d29e2-8285-4057-a337-cbf285bf789c" (UID: "3c8d29e2-8285-4057-a337-cbf285bf789c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.318222 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c8d29e2-8285-4057-a337-cbf285bf789c" (UID: "3c8d29e2-8285-4057-a337-cbf285bf789c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.384196 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l8zj\" (UniqueName: \"kubernetes.io/projected/3c8d29e2-8285-4057-a337-cbf285bf789c-kube-api-access-9l8zj\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.384545 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.384610 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8d29e2-8285-4057-a337-cbf285bf789c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.384674 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8d29e2-8285-4057-a337-cbf285bf789c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.646119 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerID="5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623" exitCode=0 Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.647446 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerID="9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575" exitCode=143 Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.646194 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.646170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8d29e2-8285-4057-a337-cbf285bf789c","Type":"ContainerDied","Data":"5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623"} Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.647780 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8d29e2-8285-4057-a337-cbf285bf789c","Type":"ContainerDied","Data":"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575"} Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.647816 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8d29e2-8285-4057-a337-cbf285bf789c","Type":"ContainerDied","Data":"e861bc1a5cd0b0e188bdf7b1492de2901f67a1e37008c2483c3f4e4fed051da1"} Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.647840 4823 scope.go:117] "RemoveContainer" containerID="5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.679198 4823 scope.go:117] "RemoveContainer" containerID="9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.689181 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.705024 4823 scope.go:117] "RemoveContainer" containerID="5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.712911 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 
17:39:27 crc kubenswrapper[4823]: E0121 17:39:27.714517 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623\": container with ID starting with 5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623 not found: ID does not exist" containerID="5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.714589 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623"} err="failed to get container status \"5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623\": rpc error: code = NotFound desc = could not find container \"5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623\": container with ID starting with 5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623 not found: ID does not exist" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.714619 4823 scope.go:117] "RemoveContainer" containerID="9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575" Jan 21 17:39:27 crc kubenswrapper[4823]: E0121 17:39:27.714959 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575\": container with ID starting with 9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575 not found: ID does not exist" containerID="9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.714982 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575"} err="failed to get container status \"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575\": rpc error: code = NotFound desc = could not find container \"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575\": container with ID starting with 9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575 not found: ID does not exist" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.714996 4823 scope.go:117] "RemoveContainer" containerID="5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.715415 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623"} err="failed to get container status \"5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623\": rpc error: code = NotFound desc = could not find container \"5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623\": container with ID starting with 5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623 not found: ID does not exist" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.715437 4823 scope.go:117] "RemoveContainer" containerID="9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.717027 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575"} err="failed to get container status \"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575\": rpc error: code = NotFound desc = could not find container \"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575\": container with ID starting with 9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575 not found: ID does not exist"
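
The ContainerStatus/DeleteContainer exchange above is a benign race, repeated once per container: both containers were already gone by the time the RemoveContainer pass in scope.go asked the runtime for their status, so CRI-O answered with gRPC code NotFound and the kubelet logged the error and moved on. Below is a sketch of how a gRPC caller typically classifies that answer with the standard status package; the error value is fabricated here to mirror the log text, not obtained from a real runtime call.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Stand-in for what a ContainerStatus RPC returns once the container is
	// removed; the desc mirrors the "could not find container" lines above.
	err := status.Error(codes.NotFound,
		`could not find container "5bbd15daff9ddb7411f3915890ecdf95dec4618e751cd59e480e605fbd2bb623"`)

	// status.FromError recovers the code, letting a caller treat NotFound as
	// "already deleted" rather than as a deletion failure.
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		fmt.Println("container already gone, nothing to remove:", s.Message())
	}
}

That reading is consistent with what follows in the log: despite four "DeleteContainer returned error" entries, the SyncLoop REMOVE for the pod completes and a fresh nova-metadata-0 is ADDed immediately afterwards.
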
\"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575\": rpc error: code = NotFound desc = could not find container \"9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575\": container with ID starting with 9be5ccc54da59e6e360b169c9a308ea561f4da79459a0a1141f7e5f93d4de575 not found: ID does not exist" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.727709 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:39:27 crc kubenswrapper[4823]: E0121 17:39:27.728323 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerName="nova-metadata-log" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.728344 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerName="nova-metadata-log" Jan 21 17:39:27 crc kubenswrapper[4823]: E0121 17:39:27.728355 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerName="nova-metadata-metadata" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.728363 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerName="nova-metadata-metadata" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.728559 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerName="nova-metadata-log" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.728581 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8d29e2-8285-4057-a337-cbf285bf789c" containerName="nova-metadata-metadata" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.729707 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.732984 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.733771 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.762407 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.793262 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.793347 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flx8p\" (UniqueName: \"kubernetes.io/projected/3843ed64-c43c-492c-969b-11777369a972-kube-api-access-flx8p\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.793370 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-config-data\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.793484 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3843ed64-c43c-492c-969b-11777369a972-logs\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.793512 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.895411 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.895478 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flx8p\" (UniqueName: \"kubernetes.io/projected/3843ed64-c43c-492c-969b-11777369a972-kube-api-access-flx8p\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.895501 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-config-data\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " 
pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.895562 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3843ed64-c43c-492c-969b-11777369a972-logs\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.895581 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.896719 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3843ed64-c43c-492c-969b-11777369a972-logs\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.901770 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.902769 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-config-data\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.903609 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:27 crc kubenswrapper[4823]: I0121 17:39:27.920156 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flx8p\" (UniqueName: \"kubernetes.io/projected/3843ed64-c43c-492c-969b-11777369a972-kube-api-access-flx8p\") pod \"nova-metadata-0\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") " pod="openstack/nova-metadata-0" Jan 21 17:39:28 crc kubenswrapper[4823]: I0121 17:39:28.052716 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 17:39:28 crc kubenswrapper[4823]: I0121 17:39:28.570890 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:39:28 crc kubenswrapper[4823]: I0121 17:39:28.660696 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3843ed64-c43c-492c-969b-11777369a972","Type":"ContainerStarted","Data":"78f3ed695307c272ebac05e65d04f27c32fb71eb3bac6f7c6a0206d469bebb86"} Jan 21 17:39:29 crc kubenswrapper[4823]: I0121 17:39:29.357985 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8d29e2-8285-4057-a337-cbf285bf789c" path="/var/lib/kubelet/pods/3c8d29e2-8285-4057-a337-cbf285bf789c/volumes" Jan 21 17:39:29 crc kubenswrapper[4823]: I0121 17:39:29.671767 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3843ed64-c43c-492c-969b-11777369a972","Type":"ContainerStarted","Data":"dd9735fe172d4ca48365a3d3b60cc16dd88ba03cb529123b625f5899a74a61b0"} Jan 21 17:39:29 crc kubenswrapper[4823]: I0121 17:39:29.671807 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3843ed64-c43c-492c-969b-11777369a972","Type":"ContainerStarted","Data":"a3db2f1691733271b9a21da0564bfe68c93b66d92087cb6af8268a4ca7b7da2d"} Jan 21 17:39:29 crc kubenswrapper[4823]: I0121 17:39:29.696282 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.696263666 podStartE2EDuration="2.696263666s" podCreationTimestamp="2026-01-21 17:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:29.694209945 +0000 UTC m=+1370.620340825" watchObservedRunningTime="2026-01-21 17:39:29.696263666 +0000 UTC m=+1370.622394546" Jan 21 17:39:29 crc kubenswrapper[4823]: I0121 17:39:29.737103 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 17:39:29 crc kubenswrapper[4823]: I0121 17:39:29.737166 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 17:39:29 crc kubenswrapper[4823]: I0121 17:39:29.767655 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.042152 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.042209 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.201845 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.217105 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.280828 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-z9rh9"] Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.281129 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9" podUID="be8be361-2dd3-4515-83ee-509176ed3eb9" containerName="dnsmasq-dns" 
containerID="cri-o://93363557d18397df2c98de6c06c1b60e298c77c92e8bb189d84b97673ce87e58" gracePeriod=10
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.689736 4823 generic.go:334] "Generic (PLEG): container finished" podID="2d463e08-e2af-4555-825a-b3913bf13d03" containerID="47ffdff1507a2f04120a0a573cd9bb4aa164a79a34c73a08ab0db3de8dc8cbd0" exitCode=0
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.689814 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qdxxs" event={"ID":"2d463e08-e2af-4555-825a-b3913bf13d03","Type":"ContainerDied","Data":"47ffdff1507a2f04120a0a573cd9bb4aa164a79a34c73a08ab0db3de8dc8cbd0"}
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.703318 4823 generic.go:334] "Generic (PLEG): container finished" podID="e9b75c81-1f2c-4b8f-a95a-98ba021bb41f" containerID="3e9311278f3e286f8c34db74363d66e17e79c70aeffe845de02c3060993331f8" exitCode=0
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.703424 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c4mnt" event={"ID":"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f","Type":"ContainerDied","Data":"3e9311278f3e286f8c34db74363d66e17e79c70aeffe845de02c3060993331f8"}
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.708622 4823 generic.go:334] "Generic (PLEG): container finished" podID="be8be361-2dd3-4515-83ee-509176ed3eb9" containerID="93363557d18397df2c98de6c06c1b60e298c77c92e8bb189d84b97673ce87e58" exitCode=0
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.709001 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9" event={"ID":"be8be361-2dd3-4515-83ee-509176ed3eb9","Type":"ContainerDied","Data":"93363557d18397df2c98de6c06c1b60e298c77c92e8bb189d84b97673ce87e58"}
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.764522 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.848118 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.959758 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-nb\") pod \"be8be361-2dd3-4515-83ee-509176ed3eb9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") "
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.959812 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-swift-storage-0\") pod \"be8be361-2dd3-4515-83ee-509176ed3eb9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") "
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.959918 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlvqk\" (UniqueName: \"kubernetes.io/projected/be8be361-2dd3-4515-83ee-509176ed3eb9-kube-api-access-rlvqk\") pod \"be8be361-2dd3-4515-83ee-509176ed3eb9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") "
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.960776 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-config\") pod \"be8be361-2dd3-4515-83ee-509176ed3eb9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") "
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.960807 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-sb\") pod \"be8be361-2dd3-4515-83ee-509176ed3eb9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") "
Jan 21 17:39:30 crc kubenswrapper[4823]: I0121 17:39:30.960834 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-svc\") pod \"be8be361-2dd3-4515-83ee-509176ed3eb9\" (UID: \"be8be361-2dd3-4515-83ee-509176ed3eb9\") "
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.005607 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be8be361-2dd3-4515-83ee-509176ed3eb9-kube-api-access-rlvqk" (OuterVolumeSpecName: "kube-api-access-rlvqk") pod "be8be361-2dd3-4515-83ee-509176ed3eb9" (UID: "be8be361-2dd3-4515-83ee-509176ed3eb9"). InnerVolumeSpecName "kube-api-access-rlvqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.043541 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-config" (OuterVolumeSpecName: "config") pod "be8be361-2dd3-4515-83ee-509176ed3eb9" (UID: "be8be361-2dd3-4515-83ee-509176ed3eb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.054547 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be8be361-2dd3-4515-83ee-509176ed3eb9" (UID: "be8be361-2dd3-4515-83ee-509176ed3eb9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.063685 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlvqk\" (UniqueName: \"kubernetes.io/projected/be8be361-2dd3-4515-83ee-509176ed3eb9-kube-api-access-rlvqk\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.063726 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-config\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.063739 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.073291 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be8be361-2dd3-4515-83ee-509176ed3eb9" (UID: "be8be361-2dd3-4515-83ee-509176ed3eb9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.083174 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.086031 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be8be361-2dd3-4515-83ee-509176ed3eb9" (UID: "be8be361-2dd3-4515-83ee-509176ed3eb9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.108530 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be8be361-2dd3-4515-83ee-509176ed3eb9" (UID: "be8be361-2dd3-4515-83ee-509176ed3eb9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.124156 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.165677 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.165724 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.165767 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8be361-2dd3-4515-83ee-509176ed3eb9-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.718802 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9" event={"ID":"be8be361-2dd3-4515-83ee-509176ed3eb9","Type":"ContainerDied","Data":"a90641493e806eed74310c43899621fe3d4251430fe4e313074b759407d3acf3"}
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.718868 4823 scope.go:117] "RemoveContainer" containerID="93363557d18397df2c98de6c06c1b60e298c77c92e8bb189d84b97673ce87e58"
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.718918 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9"
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.758500 4823 scope.go:117] "RemoveContainer" containerID="b1eeeddf6630256db257081ce06a11f270b03b21d79fbf2ac111acf799ef38b6"
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.770042 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-z9rh9"]
Jan 21 17:39:31 crc kubenswrapper[4823]: I0121 17:39:31.785767 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-z9rh9"]
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.240089 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qdxxs"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.247594 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c4mnt"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.284505 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-config-data\") pod \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") "
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.284603 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt2zb\" (UniqueName: \"kubernetes.io/projected/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-kube-api-access-tt2zb\") pod \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") "
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.284654 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-config-data\") pod \"2d463e08-e2af-4555-825a-b3913bf13d03\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") "
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.284694 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-combined-ca-bundle\") pod \"2d463e08-e2af-4555-825a-b3913bf13d03\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") "
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.284726 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-scripts\") pod \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") "
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.284765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj955\" (UniqueName: \"kubernetes.io/projected/2d463e08-e2af-4555-825a-b3913bf13d03-kube-api-access-wj955\") pod \"2d463e08-e2af-4555-825a-b3913bf13d03\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") "
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.284899 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-scripts\") pod \"2d463e08-e2af-4555-825a-b3913bf13d03\" (UID: \"2d463e08-e2af-4555-825a-b3913bf13d03\") "
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.284945 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-combined-ca-bundle\") pod \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\" (UID: \"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f\") "
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.294176 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-scripts" (OuterVolumeSpecName: "scripts") pod "2d463e08-e2af-4555-825a-b3913bf13d03" (UID: "2d463e08-e2af-4555-825a-b3913bf13d03"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.295067 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d463e08-e2af-4555-825a-b3913bf13d03-kube-api-access-wj955" (OuterVolumeSpecName: "kube-api-access-wj955") pod "2d463e08-e2af-4555-825a-b3913bf13d03" (UID: "2d463e08-e2af-4555-825a-b3913bf13d03"). InnerVolumeSpecName "kube-api-access-wj955". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.298165 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-kube-api-access-tt2zb" (OuterVolumeSpecName: "kube-api-access-tt2zb") pod "e9b75c81-1f2c-4b8f-a95a-98ba021bb41f" (UID: "e9b75c81-1f2c-4b8f-a95a-98ba021bb41f"). InnerVolumeSpecName "kube-api-access-tt2zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.309281 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-scripts" (OuterVolumeSpecName: "scripts") pod "e9b75c81-1f2c-4b8f-a95a-98ba021bb41f" (UID: "e9b75c81-1f2c-4b8f-a95a-98ba021bb41f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.315089 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d463e08-e2af-4555-825a-b3913bf13d03" (UID: "2d463e08-e2af-4555-825a-b3913bf13d03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.316732 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-config-data" (OuterVolumeSpecName: "config-data") pod "e9b75c81-1f2c-4b8f-a95a-98ba021bb41f" (UID: "e9b75c81-1f2c-4b8f-a95a-98ba021bb41f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.321108 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9b75c81-1f2c-4b8f-a95a-98ba021bb41f" (UID: "e9b75c81-1f2c-4b8f-a95a-98ba021bb41f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.346667 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-config-data" (OuterVolumeSpecName: "config-data") pod "2d463e08-e2af-4555-825a-b3913bf13d03" (UID: "2d463e08-e2af-4555-825a-b3913bf13d03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.388254 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.388299 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.388312 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.388326 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj955\" (UniqueName: \"kubernetes.io/projected/2d463e08-e2af-4555-825a-b3913bf13d03-kube-api-access-wj955\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.388337 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d463e08-e2af-4555-825a-b3913bf13d03-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.388350 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.388361 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.388373 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt2zb\" (UniqueName: \"kubernetes.io/projected/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f-kube-api-access-tt2zb\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.733181 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c4mnt"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.733335 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c4mnt" event={"ID":"e9b75c81-1f2c-4b8f-a95a-98ba021bb41f","Type":"ContainerDied","Data":"fef9244f51c89a9aeb3161e61d1b687e185073623880f16d5f60c2ca1db66169"}
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.735617 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef9244f51c89a9aeb3161e61d1b687e185073623880f16d5f60c2ca1db66169"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.738937 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qdxxs" event={"ID":"2d463e08-e2af-4555-825a-b3913bf13d03","Type":"ContainerDied","Data":"27dfedaee221c73f3686d66b3b82dd2fde4f20aeb305ca2a5e4d0be6a6c7c86a"}
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.739181 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27dfedaee221c73f3686d66b3b82dd2fde4f20aeb305ca2a5e4d0be6a6c7c86a"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.739402 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qdxxs"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.842520 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 17:39:32 crc kubenswrapper[4823]: E0121 17:39:32.843015 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b75c81-1f2c-4b8f-a95a-98ba021bb41f" containerName="nova-cell1-conductor-db-sync"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.843033 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b75c81-1f2c-4b8f-a95a-98ba021bb41f" containerName="nova-cell1-conductor-db-sync"
Jan 21 17:39:32 crc kubenswrapper[4823]: E0121 17:39:32.843067 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d463e08-e2af-4555-825a-b3913bf13d03" containerName="nova-manage"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.843074 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d463e08-e2af-4555-825a-b3913bf13d03" containerName="nova-manage"
Jan 21 17:39:32 crc kubenswrapper[4823]: E0121 17:39:32.843088 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8be361-2dd3-4515-83ee-509176ed3eb9" containerName="init"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.843095 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8be361-2dd3-4515-83ee-509176ed3eb9" containerName="init"
Jan 21 17:39:32 crc kubenswrapper[4823]: E0121 17:39:32.843107 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8be361-2dd3-4515-83ee-509176ed3eb9" containerName="dnsmasq-dns"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.843113 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8be361-2dd3-4515-83ee-509176ed3eb9" containerName="dnsmasq-dns"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.843337 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d463e08-e2af-4555-825a-b3913bf13d03" containerName="nova-manage"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.843357 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8be361-2dd3-4515-83ee-509176ed3eb9" containerName="dnsmasq-dns"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.843370 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b75c81-1f2c-4b8f-a95a-98ba021bb41f" containerName="nova-cell1-conductor-db-sync"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.844076 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.846523 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.856830 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.901159 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df646a74-5ad5-41ea-8ef1-ab4f6287d876-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"df646a74-5ad5-41ea-8ef1-ab4f6287d876\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.901233 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvv2\" (UniqueName: \"kubernetes.io/projected/df646a74-5ad5-41ea-8ef1-ab4f6287d876-kube-api-access-9xvv2\") pod \"nova-cell1-conductor-0\" (UID: \"df646a74-5ad5-41ea-8ef1-ab4f6287d876\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.901287 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df646a74-5ad5-41ea-8ef1-ab4f6287d876-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"df646a74-5ad5-41ea-8ef1-ab4f6287d876\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.919450 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.919670 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-log" containerID="cri-o://5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2" gracePeriod=30
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.920091 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-api" containerID="cri-o://34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a" gracePeriod=30
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.937025 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 17:39:32 crc kubenswrapper[4823]: I0121 17:39:32.937209 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d08918db-40e1-4228-a271-0f07ad9ebf45" containerName="nova-scheduler-scheduler" containerID="cri-o://2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a" gracePeriod=30
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.002614 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df646a74-5ad5-41ea-8ef1-ab4f6287d876-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"df646a74-5ad5-41ea-8ef1-ab4f6287d876\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.002785 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df646a74-5ad5-41ea-8ef1-ab4f6287d876-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"df646a74-5ad5-41ea-8ef1-ab4f6287d876\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.002829 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvv2\" (UniqueName: \"kubernetes.io/projected/df646a74-5ad5-41ea-8ef1-ab4f6287d876-kube-api-access-9xvv2\") pod \"nova-cell1-conductor-0\" (UID: \"df646a74-5ad5-41ea-8ef1-ab4f6287d876\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.008040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df646a74-5ad5-41ea-8ef1-ab4f6287d876-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"df646a74-5ad5-41ea-8ef1-ab4f6287d876\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.009478 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df646a74-5ad5-41ea-8ef1-ab4f6287d876-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"df646a74-5ad5-41ea-8ef1-ab4f6287d876\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.031437 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvv2\" (UniqueName: \"kubernetes.io/projected/df646a74-5ad5-41ea-8ef1-ab4f6287d876-kube-api-access-9xvv2\") pod \"nova-cell1-conductor-0\" (UID: \"df646a74-5ad5-41ea-8ef1-ab4f6287d876\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.053301 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.053349 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.131519 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.162772 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.362962 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be8be361-2dd3-4515-83ee-509176ed3eb9" path="/var/lib/kubelet/pods/be8be361-2dd3-4515-83ee-509176ed3eb9/volumes"
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.575025 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.749941 4823 generic.go:334] "Generic (PLEG): container finished" podID="b24558b7-c596-47e8-8902-888664d7a7b0" containerID="5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2" exitCode=143
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.750033 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b24558b7-c596-47e8-8902-888664d7a7b0","Type":"ContainerDied","Data":"5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2"}
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.752605 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"df646a74-5ad5-41ea-8ef1-ab4f6287d876","Type":"ContainerStarted","Data":"be5001acd4e9e9082ca297b1984d5676c0b6e658f97e6370cee0e05a7893b8af"}
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.752927 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3843ed64-c43c-492c-969b-11777369a972" containerName="nova-metadata-metadata" containerID="cri-o://dd9735fe172d4ca48365a3d3b60cc16dd88ba03cb529123b625f5899a74a61b0" gracePeriod=30
Jan 21 17:39:33 crc kubenswrapper[4823]: I0121 17:39:33.753057 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3843ed64-c43c-492c-969b-11777369a972" containerName="nova-metadata-log" containerID="cri-o://a3db2f1691733271b9a21da0564bfe68c93b66d92087cb6af8268a4ca7b7da2d" gracePeriod=30
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.353309 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.436725 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-config-data\") pod \"d08918db-40e1-4228-a271-0f07ad9ebf45\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") "
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.436956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-combined-ca-bundle\") pod \"d08918db-40e1-4228-a271-0f07ad9ebf45\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") "
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.437067 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24t55\" (UniqueName: \"kubernetes.io/projected/d08918db-40e1-4228-a271-0f07ad9ebf45-kube-api-access-24t55\") pod \"d08918db-40e1-4228-a271-0f07ad9ebf45\" (UID: \"d08918db-40e1-4228-a271-0f07ad9ebf45\") "
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.444173 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08918db-40e1-4228-a271-0f07ad9ebf45-kube-api-access-24t55" (OuterVolumeSpecName: "kube-api-access-24t55") pod "d08918db-40e1-4228-a271-0f07ad9ebf45" (UID: "d08918db-40e1-4228-a271-0f07ad9ebf45"). InnerVolumeSpecName "kube-api-access-24t55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.465549 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d08918db-40e1-4228-a271-0f07ad9ebf45" (UID: "d08918db-40e1-4228-a271-0f07ad9ebf45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.475008 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-config-data" (OuterVolumeSpecName: "config-data") pod "d08918db-40e1-4228-a271-0f07ad9ebf45" (UID: "d08918db-40e1-4228-a271-0f07ad9ebf45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.539487 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24t55\" (UniqueName: \"kubernetes.io/projected/d08918db-40e1-4228-a271-0f07ad9ebf45-kube-api-access-24t55\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.539565 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.539585 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08918db-40e1-4228-a271-0f07ad9ebf45-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.771636 4823 generic.go:334] "Generic (PLEG): container finished" podID="3843ed64-c43c-492c-969b-11777369a972" containerID="dd9735fe172d4ca48365a3d3b60cc16dd88ba03cb529123b625f5899a74a61b0" exitCode=0
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.771961 4823 generic.go:334] "Generic (PLEG): container finished" podID="3843ed64-c43c-492c-969b-11777369a972" containerID="a3db2f1691733271b9a21da0564bfe68c93b66d92087cb6af8268a4ca7b7da2d" exitCode=143
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.772007 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3843ed64-c43c-492c-969b-11777369a972","Type":"ContainerDied","Data":"dd9735fe172d4ca48365a3d3b60cc16dd88ba03cb529123b625f5899a74a61b0"}
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.772032 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3843ed64-c43c-492c-969b-11777369a972","Type":"ContainerDied","Data":"a3db2f1691733271b9a21da0564bfe68c93b66d92087cb6af8268a4ca7b7da2d"}
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.777261 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"df646a74-5ad5-41ea-8ef1-ab4f6287d876","Type":"ContainerStarted","Data":"b135e4cded46e57f5c0a54ffad10c78bd5133cd66110f831b61dd7af91ccb398"}
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.777558 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.780077 4823 generic.go:334] "Generic (PLEG): container finished" podID="d08918db-40e1-4228-a271-0f07ad9ebf45" containerID="2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a" exitCode=0
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.780132 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d08918db-40e1-4228-a271-0f07ad9ebf45","Type":"ContainerDied","Data":"2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a"}
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.780164 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d08918db-40e1-4228-a271-0f07ad9ebf45","Type":"ContainerDied","Data":"b2bd23b3e9356f76b42d7c62ee72603dfdf831bbc6420c399b00aed13983a343"}
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.780187 4823 scope.go:117] "RemoveContainer" containerID="2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.780189 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.795136 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.795115534 podStartE2EDuration="2.795115534s" podCreationTimestamp="2026-01-21 17:39:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:34.793878694 +0000 UTC m=+1375.720009564" watchObservedRunningTime="2026-01-21 17:39:34.795115534 +0000 UTC m=+1375.721246394"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.845300 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.851268 4823 scope.go:117] "RemoveContainer" containerID="2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a"
Jan 21 17:39:34 crc kubenswrapper[4823]: E0121 17:39:34.853171 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a\": container with ID starting with 2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a not found: ID does not exist" containerID="2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.853219 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a"} err="failed to get container status \"2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a\": rpc error: code = NotFound desc = could not find container \"2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a\": container with ID starting with 2bcbe62806dbef9ff02bd023857749913b61a311ca433121d6e3460e376eb63a not found: ID does not exist"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.867422 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.879214 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 17:39:34 crc kubenswrapper[4823]: E0121 17:39:34.879694 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08918db-40e1-4228-a271-0f07ad9ebf45" containerName="nova-scheduler-scheduler"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.879712 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08918db-40e1-4228-a271-0f07ad9ebf45" containerName="nova-scheduler-scheduler"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.879952 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08918db-40e1-4228-a271-0f07ad9ebf45" containerName="nova-scheduler-scheduler"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.880915 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.883199 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.890209 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.948106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-config-data\") pod \"nova-scheduler-0\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " pod="openstack/nova-scheduler-0"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.948182 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " pod="openstack/nova-scheduler-0"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.948273 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxdr\" (UniqueName: \"kubernetes.io/projected/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-kube-api-access-rfxdr\") pod \"nova-scheduler-0\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " pod="openstack/nova-scheduler-0"
Jan 21 17:39:34 crc kubenswrapper[4823]: I0121 17:39:34.962542 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.052068 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-config-data\") pod \"3843ed64-c43c-492c-969b-11777369a972\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") "
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.052209 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flx8p\" (UniqueName: \"kubernetes.io/projected/3843ed64-c43c-492c-969b-11777369a972-kube-api-access-flx8p\") pod \"3843ed64-c43c-492c-969b-11777369a972\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") "
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.052265 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-combined-ca-bundle\") pod \"3843ed64-c43c-492c-969b-11777369a972\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") "
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.052341 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-nova-metadata-tls-certs\") pod \"3843ed64-c43c-492c-969b-11777369a972\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") "
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.052400 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3843ed64-c43c-492c-969b-11777369a972-logs\") pod \"3843ed64-c43c-492c-969b-11777369a972\" (UID: \"3843ed64-c43c-492c-969b-11777369a972\") "
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.052588 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfxdr\" (UniqueName: \"kubernetes.io/projected/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-kube-api-access-rfxdr\") pod \"nova-scheduler-0\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " pod="openstack/nova-scheduler-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.052647 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-config-data\") pod \"nova-scheduler-0\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " pod="openstack/nova-scheduler-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.052706 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " pod="openstack/nova-scheduler-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.053589 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3843ed64-c43c-492c-969b-11777369a972-logs" (OuterVolumeSpecName: "logs") pod "3843ed64-c43c-492c-969b-11777369a972" (UID: "3843ed64-c43c-492c-969b-11777369a972"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.060315 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3843ed64-c43c-492c-969b-11777369a972-kube-api-access-flx8p" (OuterVolumeSpecName: "kube-api-access-flx8p") pod "3843ed64-c43c-492c-969b-11777369a972" (UID: "3843ed64-c43c-492c-969b-11777369a972"). InnerVolumeSpecName "kube-api-access-flx8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.060944 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " pod="openstack/nova-scheduler-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.066618 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-config-data\") pod \"nova-scheduler-0\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " pod="openstack/nova-scheduler-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.073398 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxdr\" (UniqueName: \"kubernetes.io/projected/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-kube-api-access-rfxdr\") pod \"nova-scheduler-0\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " pod="openstack/nova-scheduler-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.095961 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-config-data" (OuterVolumeSpecName: "config-data") pod "3843ed64-c43c-492c-969b-11777369a972" (UID: "3843ed64-c43c-492c-969b-11777369a972"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.108141 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3843ed64-c43c-492c-969b-11777369a972" (UID: "3843ed64-c43c-492c-969b-11777369a972"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.144417 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3843ed64-c43c-492c-969b-11777369a972" (UID: "3843ed64-c43c-492c-969b-11777369a972"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.159352 4823 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.159399 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3843ed64-c43c-492c-969b-11777369a972-logs\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.159415 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.159431 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flx8p\" (UniqueName: \"kubernetes.io/projected/3843ed64-c43c-492c-969b-11777369a972-kube-api-access-flx8p\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.159445 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3843ed64-c43c-492c-969b-11777369a972-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.275093 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.376145 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d08918db-40e1-4228-a271-0f07ad9ebf45" path="/var/lib/kubelet/pods/d08918db-40e1-4228-a271-0f07ad9ebf45/volumes"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.730648 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-z9rh9" podUID="be8be361-2dd3-4515-83ee-509176ed3eb9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.184:5353: i/o timeout"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.735549 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.796106 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b","Type":"ContainerStarted","Data":"20fd088a6859bf5e96855b7b9b8748056f4722dc24e383822d62850849f34e24"}
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.800596 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3843ed64-c43c-492c-969b-11777369a972","Type":"ContainerDied","Data":"78f3ed695307c272ebac05e65d04f27c32fb71eb3bac6f7c6a0206d469bebb86"}
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.800641 4823 scope.go:117] "RemoveContainer" containerID="dd9735fe172d4ca48365a3d3b60cc16dd88ba03cb529123b625f5899a74a61b0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.800822 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.890087 4823 scope.go:117] "RemoveContainer" containerID="a3db2f1691733271b9a21da0564bfe68c93b66d92087cb6af8268a4ca7b7da2d"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.918543 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.933130 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.945276 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 17:39:35 crc kubenswrapper[4823]: E0121 17:39:35.945801 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3843ed64-c43c-492c-969b-11777369a972" containerName="nova-metadata-metadata"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.945823 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3843ed64-c43c-492c-969b-11777369a972" containerName="nova-metadata-metadata"
Jan 21 17:39:35 crc kubenswrapper[4823]: E0121 17:39:35.945846 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3843ed64-c43c-492c-969b-11777369a972" containerName="nova-metadata-log"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.945872 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3843ed64-c43c-492c-969b-11777369a972" containerName="nova-metadata-log"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.946105 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3843ed64-c43c-492c-969b-11777369a972" containerName="nova-metadata-metadata"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.946130 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3843ed64-c43c-492c-969b-11777369a972" containerName="nova-metadata-log"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.947346 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.949802 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.950036 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.956994 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.976013 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd250d7-7dc6-435f-acd6-77440e490b8e-logs\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.976069 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97c5r\" (UniqueName: \"kubernetes.io/projected/0bd250d7-7dc6-435f-acd6-77440e490b8e-kube-api-access-97c5r\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.976091 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.976142 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-config-data\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:35 crc kubenswrapper[4823]: I0121 17:39:35.976164 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.077262 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-config-data\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.077703 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.077879 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd250d7-7dc6-435f-acd6-77440e490b8e-logs\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.077922 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.077949 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97c5r\" (UniqueName: \"kubernetes.io/projected/0bd250d7-7dc6-435f-acd6-77440e490b8e-kube-api-access-97c5r\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.078372 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd250d7-7dc6-435f-acd6-77440e490b8e-logs\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.082871 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.091621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-config-data\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.095603 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.099452 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97c5r\" (UniqueName: \"kubernetes.io/projected/0bd250d7-7dc6-435f-acd6-77440e490b8e-kube-api-access-97c5r\") pod \"nova-metadata-0\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") " pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.267031 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.427859 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.589793 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-config-data\") pod \"b24558b7-c596-47e8-8902-888664d7a7b0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") "
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.589918 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24558b7-c596-47e8-8902-888664d7a7b0-logs\") pod \"b24558b7-c596-47e8-8902-888664d7a7b0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") "
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.589994 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-combined-ca-bundle\") pod \"b24558b7-c596-47e8-8902-888664d7a7b0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") "
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.590019 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnn7n\" (UniqueName: \"kubernetes.io/projected/b24558b7-c596-47e8-8902-888664d7a7b0-kube-api-access-fnn7n\") pod \"b24558b7-c596-47e8-8902-888664d7a7b0\" (UID: \"b24558b7-c596-47e8-8902-888664d7a7b0\") "
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.590652 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b24558b7-c596-47e8-8902-888664d7a7b0-logs" (OuterVolumeSpecName: "logs") pod "b24558b7-c596-47e8-8902-888664d7a7b0" (UID: "b24558b7-c596-47e8-8902-888664d7a7b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.594316 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24558b7-c596-47e8-8902-888664d7a7b0-kube-api-access-fnn7n" (OuterVolumeSpecName: "kube-api-access-fnn7n") pod "b24558b7-c596-47e8-8902-888664d7a7b0" (UID: "b24558b7-c596-47e8-8902-888664d7a7b0"). InnerVolumeSpecName "kube-api-access-fnn7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.623067 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b24558b7-c596-47e8-8902-888664d7a7b0" (UID: "b24558b7-c596-47e8-8902-888664d7a7b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.624168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-config-data" (OuterVolumeSpecName: "config-data") pod "b24558b7-c596-47e8-8902-888664d7a7b0" (UID: "b24558b7-c596-47e8-8902-888664d7a7b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.692477 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.692519 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24558b7-c596-47e8-8902-888664d7a7b0-logs\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.692532 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24558b7-c596-47e8-8902-888664d7a7b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.692547 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnn7n\" (UniqueName: \"kubernetes.io/projected/b24558b7-c596-47e8-8902-888664d7a7b0-kube-api-access-fnn7n\") on node \"crc\" DevicePath \"\""
Jan 21 17:39:36 crc kubenswrapper[4823]: W0121 17:39:36.758110 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd250d7_7dc6_435f_acd6_77440e490b8e.slice/crio-9b2bce08fddb42639db2c35647872f58b2198d408de1febd4cbc0d4f83099644 WatchSource:0}: Error finding container 9b2bce08fddb42639db2c35647872f58b2198d408de1febd4cbc0d4f83099644: Status 404 returned error can't find the container with id 9b2bce08fddb42639db2c35647872f58b2198d408de1febd4cbc0d4f83099644
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.759016 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.810529 4823 generic.go:334] "Generic (PLEG): container finished" podID="b24558b7-c596-47e8-8902-888664d7a7b0" containerID="34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a" exitCode=0
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.810582 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b24558b7-c596-47e8-8902-888664d7a7b0","Type":"ContainerDied","Data":"34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a"}
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.810642 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b24558b7-c596-47e8-8902-888664d7a7b0","Type":"ContainerDied","Data":"93cf412b3bd56b00732bcc099aed10236c145dbdec7b6748b8b75679f963e8d5"}
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.810606 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.810668 4823 scope.go:117] "RemoveContainer" containerID="34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.817563 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b","Type":"ContainerStarted","Data":"3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198"}
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.820328 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bd250d7-7dc6-435f-acd6-77440e490b8e","Type":"ContainerStarted","Data":"9b2bce08fddb42639db2c35647872f58b2198d408de1febd4cbc0d4f83099644"}
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.842104 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.842088077 podStartE2EDuration="2.842088077s" podCreationTimestamp="2026-01-21 17:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:36.834270974 +0000 UTC m=+1377.760401854" watchObservedRunningTime="2026-01-21 17:39:36.842088077 +0000 UTC m=+1377.768218937"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.866401 4823 scope.go:117] "RemoveContainer" containerID="5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.889932 4823 scope.go:117] "RemoveContainer" containerID="34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a"
Jan 21 17:39:36 crc kubenswrapper[4823]: E0121 17:39:36.890586 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a\": container with ID starting with 34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a not found: ID does not exist" containerID="34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.890701 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a"} err="failed to get container status \"34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a\": rpc error: code = NotFound desc = could not find container \"34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a\": container with ID starting with 34aaf2cb934fbba0c1fa32f8072d71080ee9747b75cb36de9fd52b98cb35b13a not found: ID does not exist"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.890805 4823 scope.go:117] "RemoveContainer" containerID="5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2"
Jan 21 17:39:36 crc kubenswrapper[4823]: E0121 17:39:36.891237 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2\": container with ID starting with 5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2 not found: ID does not exist" containerID="5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2"
Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.891326 4823 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2"} err="failed to get container status \"5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2\": rpc error: code = NotFound desc = could not find container \"5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2\": container with ID starting with 5db128bb2861197fae47367da89ca4672cd656466d80b546f89edc9a556ed4e2 not found: ID does not exist" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.913793 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.937036 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.952897 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 17:39:36 crc kubenswrapper[4823]: E0121 17:39:36.953497 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-api" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.953524 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-api" Jan 21 17:39:36 crc kubenswrapper[4823]: E0121 17:39:36.953551 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-log" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.953561 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-log" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.953809 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-api" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.953904 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" containerName="nova-api-log" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.955538 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.957807 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.965034 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.998104 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-config-data\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.998164 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvzg4\" (UniqueName: \"kubernetes.io/projected/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-kube-api-access-tvzg4\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.998315 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:36 crc kubenswrapper[4823]: I0121 17:39:36.998342 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-logs\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.101550 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.101609 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-logs\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.101687 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-config-data\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.101709 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvzg4\" (UniqueName: \"kubernetes.io/projected/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-kube-api-access-tvzg4\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.102199 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-logs\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " 
pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.106451 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.107195 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-config-data\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.118777 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvzg4\" (UniqueName: \"kubernetes.io/projected/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-kube-api-access-tvzg4\") pod \"nova-api-0\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.279138 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.357822 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3843ed64-c43c-492c-969b-11777369a972" path="/var/lib/kubelet/pods/3843ed64-c43c-492c-969b-11777369a972/volumes" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.358677 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24558b7-c596-47e8-8902-888664d7a7b0" path="/var/lib/kubelet/pods/b24558b7-c596-47e8-8902-888664d7a7b0/volumes" Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.739446 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.834847 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bd250d7-7dc6-435f-acd6-77440e490b8e","Type":"ContainerStarted","Data":"b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc"} Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.834915 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bd250d7-7dc6-435f-acd6-77440e490b8e","Type":"ContainerStarted","Data":"3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb"} Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.837359 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a","Type":"ContainerStarted","Data":"5c6288528975517ef70f403db4f137f476d3cc65641ad227550574b9aed4dbcc"} Jan 21 17:39:37 crc kubenswrapper[4823]: I0121 17:39:37.857950 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.8579305699999997 podStartE2EDuration="2.85793057s" podCreationTimestamp="2026-01-21 17:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:37.853219033 +0000 UTC m=+1378.779349903" watchObservedRunningTime="2026-01-21 17:39:37.85793057 +0000 UTC m=+1378.784061430" Jan 21 17:39:38 crc kubenswrapper[4823]: I0121 17:39:38.203432 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 17:39:38 crc 
kubenswrapper[4823]: I0121 17:39:38.849338 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a","Type":"ContainerStarted","Data":"d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82"} Jan 21 17:39:38 crc kubenswrapper[4823]: I0121 17:39:38.849584 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a","Type":"ContainerStarted","Data":"79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084"} Jan 21 17:39:38 crc kubenswrapper[4823]: I0121 17:39:38.866917 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8668982830000003 podStartE2EDuration="2.866898283s" podCreationTimestamp="2026-01-21 17:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:39:38.865880587 +0000 UTC m=+1379.792011457" watchObservedRunningTime="2026-01-21 17:39:38.866898283 +0000 UTC m=+1379.793029143" Jan 21 17:39:39 crc kubenswrapper[4823]: I0121 17:39:39.532492 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 17:39:40 crc kubenswrapper[4823]: I0121 17:39:40.276082 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 17:39:41 crc kubenswrapper[4823]: I0121 17:39:41.267716 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 17:39:41 crc kubenswrapper[4823]: I0121 17:39:41.268123 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.128111 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.128579 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d465f8f2-aad6-47d6-8887-ee38d8b846ac" containerName="kube-state-metrics" containerID="cri-o://471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9" gracePeriod=30 Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.720676 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.865016 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-756mp\" (UniqueName: \"kubernetes.io/projected/d465f8f2-aad6-47d6-8887-ee38d8b846ac-kube-api-access-756mp\") pod \"d465f8f2-aad6-47d6-8887-ee38d8b846ac\" (UID: \"d465f8f2-aad6-47d6-8887-ee38d8b846ac\") " Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.871194 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d465f8f2-aad6-47d6-8887-ee38d8b846ac-kube-api-access-756mp" (OuterVolumeSpecName: "kube-api-access-756mp") pod "d465f8f2-aad6-47d6-8887-ee38d8b846ac" (UID: "d465f8f2-aad6-47d6-8887-ee38d8b846ac"). InnerVolumeSpecName "kube-api-access-756mp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.902143 4823 generic.go:334] "Generic (PLEG): container finished" podID="d465f8f2-aad6-47d6-8887-ee38d8b846ac" containerID="471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9" exitCode=2 Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.902208 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d465f8f2-aad6-47d6-8887-ee38d8b846ac","Type":"ContainerDied","Data":"471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9"} Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.902271 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d465f8f2-aad6-47d6-8887-ee38d8b846ac","Type":"ContainerDied","Data":"778e0f12536a7e7298554e71ef1289337da3bcba4dacbccd07d9f30e617687c5"} Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.902310 4823 scope.go:117] "RemoveContainer" containerID="471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.902218 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.937387 4823 scope.go:117] "RemoveContainer" containerID="471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9" Jan 21 17:39:43 crc kubenswrapper[4823]: E0121 17:39:43.937938 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9\": container with ID starting with 471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9 not found: ID does not exist" containerID="471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.937991 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9"} err="failed to get container status \"471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9\": rpc error: code = NotFound desc = could not find container \"471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9\": container with ID starting with 471853f2224c163b3910165fa4b2b08a893a9ac414f0e3e56c8101f4357e76e9 not found: ID does not exist" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.948744 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.966273 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.966929 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-756mp\" (UniqueName: \"kubernetes.io/projected/d465f8f2-aad6-47d6-8887-ee38d8b846ac-kube-api-access-756mp\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.978387 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 17:39:43 crc kubenswrapper[4823]: E0121 17:39:43.979199 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d465f8f2-aad6-47d6-8887-ee38d8b846ac" containerName="kube-state-metrics" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.979237 4823 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d465f8f2-aad6-47d6-8887-ee38d8b846ac" containerName="kube-state-metrics" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.979612 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d465f8f2-aad6-47d6-8887-ee38d8b846ac" containerName="kube-state-metrics" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.980831 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.983497 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.984234 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 17:39:43 crc kubenswrapper[4823]: I0121 17:39:43.989472 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.068663 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f832894-8cfb-4f27-b494-21b6edc4516f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.069028 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9g7\" (UniqueName: \"kubernetes.io/projected/0f832894-8cfb-4f27-b494-21b6edc4516f-kube-api-access-mr9g7\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.069289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0f832894-8cfb-4f27-b494-21b6edc4516f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.069391 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f832894-8cfb-4f27-b494-21b6edc4516f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.171443 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0f832894-8cfb-4f27-b494-21b6edc4516f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.171506 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f832894-8cfb-4f27-b494-21b6edc4516f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.171559 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f832894-8cfb-4f27-b494-21b6edc4516f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.171620 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9g7\" (UniqueName: \"kubernetes.io/projected/0f832894-8cfb-4f27-b494-21b6edc4516f-kube-api-access-mr9g7\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.176499 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0f832894-8cfb-4f27-b494-21b6edc4516f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.180036 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f832894-8cfb-4f27-b494-21b6edc4516f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.192945 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f832894-8cfb-4f27-b494-21b6edc4516f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.197887 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9g7\" (UniqueName: \"kubernetes.io/projected/0f832894-8cfb-4f27-b494-21b6edc4516f-kube-api-access-mr9g7\") pod \"kube-state-metrics-0\" (UID: \"0f832894-8cfb-4f27-b494-21b6edc4516f\") " pod="openstack/kube-state-metrics-0" Jan 21 17:39:44 crc kubenswrapper[4823]: I0121 17:39:44.311388 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:44.889396 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:44.916121 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0f832894-8cfb-4f27-b494-21b6edc4516f","Type":"ContainerStarted","Data":"02b81f50937553959c8b1d31078f55c590a0a771ec4d6ec02756431017839844"} Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:45.276186 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:45.292006 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:45.292250 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="ceilometer-central-agent" containerID="cri-o://cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89" gracePeriod=30 Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:45.292684 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="proxy-httpd" containerID="cri-o://c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87" gracePeriod=30 Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:45.292761 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="sg-core" containerID="cri-o://00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d" gracePeriod=30 Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:45.292805 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="ceilometer-notification-agent" containerID="cri-o://5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983" gracePeriod=30 Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:45.327982 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:45.361005 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d465f8f2-aad6-47d6-8887-ee38d8b846ac" path="/var/lib/kubelet/pods/d465f8f2-aad6-47d6-8887-ee38d8b846ac/volumes" Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:45.955892 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.267614 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.267969 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.940345 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0f832894-8cfb-4f27-b494-21b6edc4516f","Type":"ContainerStarted","Data":"fc01e02e9e98519897c6607479a3a3783724a095c321a80a7fd2183debf19186"} Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.941008 4823 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.945636 4823 generic.go:334] "Generic (PLEG): container finished" podID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerID="c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87" exitCode=0 Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.945679 4823 generic.go:334] "Generic (PLEG): container finished" podID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerID="00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d" exitCode=2 Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.945697 4823 generic.go:334] "Generic (PLEG): container finished" podID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerID="cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89" exitCode=0 Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.946773 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerDied","Data":"c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87"} Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.946824 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerDied","Data":"00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d"} Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.946847 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerDied","Data":"cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89"} Jan 21 17:39:46 crc kubenswrapper[4823]: I0121 17:39:46.977493 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.664226304 podStartE2EDuration="3.977468002s" podCreationTimestamp="2026-01-21 17:39:43 +0000 UTC" firstStartedPulling="2026-01-21 17:39:44.884633525 +0000 UTC m=+1385.810764385" lastFinishedPulling="2026-01-21 17:39:46.197875223 +0000 UTC m=+1387.124006083" observedRunningTime="2026-01-21 17:39:46.960305898 +0000 UTC m=+1387.886436788" watchObservedRunningTime="2026-01-21 17:39:46.977468002 +0000 UTC m=+1387.903598892" Jan 21 17:39:47 crc kubenswrapper[4823]: I0121 17:39:47.280048 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 17:39:47 crc kubenswrapper[4823]: I0121 17:39:47.280107 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 17:39:47 crc kubenswrapper[4823]: I0121 17:39:47.296106 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 17:39:47 crc kubenswrapper[4823]: I0121 17:39:47.296106 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.376086 4823 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.376183 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.482746 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.602248 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-run-httpd\") pod \"750bc099-66ce-41b7-9d5b-5e3872154ff8\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.602412 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-config-data\") pod \"750bc099-66ce-41b7-9d5b-5e3872154ff8\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.602444 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-scripts\") pod \"750bc099-66ce-41b7-9d5b-5e3872154ff8\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.602497 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9r2p\" (UniqueName: \"kubernetes.io/projected/750bc099-66ce-41b7-9d5b-5e3872154ff8-kube-api-access-d9r2p\") pod \"750bc099-66ce-41b7-9d5b-5e3872154ff8\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.602559 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-log-httpd\") pod \"750bc099-66ce-41b7-9d5b-5e3872154ff8\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.602654 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-sg-core-conf-yaml\") pod \"750bc099-66ce-41b7-9d5b-5e3872154ff8\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.602783 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-combined-ca-bundle\") pod \"750bc099-66ce-41b7-9d5b-5e3872154ff8\" (UID: \"750bc099-66ce-41b7-9d5b-5e3872154ff8\") " Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.604080 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "750bc099-66ce-41b7-9d5b-5e3872154ff8" 
(UID: "750bc099-66ce-41b7-9d5b-5e3872154ff8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.604621 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "750bc099-66ce-41b7-9d5b-5e3872154ff8" (UID: "750bc099-66ce-41b7-9d5b-5e3872154ff8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.609613 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750bc099-66ce-41b7-9d5b-5e3872154ff8-kube-api-access-d9r2p" (OuterVolumeSpecName: "kube-api-access-d9r2p") pod "750bc099-66ce-41b7-9d5b-5e3872154ff8" (UID: "750bc099-66ce-41b7-9d5b-5e3872154ff8"). InnerVolumeSpecName "kube-api-access-d9r2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.619823 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-scripts" (OuterVolumeSpecName: "scripts") pod "750bc099-66ce-41b7-9d5b-5e3872154ff8" (UID: "750bc099-66ce-41b7-9d5b-5e3872154ff8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.636405 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "750bc099-66ce-41b7-9d5b-5e3872154ff8" (UID: "750bc099-66ce-41b7-9d5b-5e3872154ff8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.700380 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "750bc099-66ce-41b7-9d5b-5e3872154ff8" (UID: "750bc099-66ce-41b7-9d5b-5e3872154ff8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.705531 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.705598 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9r2p\" (UniqueName: \"kubernetes.io/projected/750bc099-66ce-41b7-9d5b-5e3872154ff8-kube-api-access-d9r2p\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.705615 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.705627 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.705638 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.705649 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750bc099-66ce-41b7-9d5b-5e3872154ff8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.722337 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-config-data" (OuterVolumeSpecName: "config-data") pod "750bc099-66ce-41b7-9d5b-5e3872154ff8" (UID: "750bc099-66ce-41b7-9d5b-5e3872154ff8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.807267 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750bc099-66ce-41b7-9d5b-5e3872154ff8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.965935 4823 generic.go:334] "Generic (PLEG): container finished" podID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerID="5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983" exitCode=0 Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.965994 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.966014 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerDied","Data":"5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983"} Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.966410 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750bc099-66ce-41b7-9d5b-5e3872154ff8","Type":"ContainerDied","Data":"f12bb6305eff55b0554d303ec36801e95d7a0d62c9e678fbe7c9670fb10bb4e2"} Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.966430 4823 scope.go:117] "RemoveContainer" containerID="c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87" Jan 21 17:39:48 crc kubenswrapper[4823]: I0121 17:39:48.993511 4823 scope.go:117] "RemoveContainer" containerID="00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.018333 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.035580 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.043302 4823 scope.go:117] "RemoveContainer" containerID="5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.065458 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:49 crc kubenswrapper[4823]: E0121 17:39:49.065964 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="proxy-httpd" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.065981 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="proxy-httpd" Jan 21 17:39:49 crc kubenswrapper[4823]: E0121 17:39:49.065998 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="ceilometer-central-agent" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.066006 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="ceilometer-central-agent" Jan 21 17:39:49 crc kubenswrapper[4823]: E0121 17:39:49.066036 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="sg-core" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.066043 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="sg-core" Jan 21 17:39:49 crc kubenswrapper[4823]: E0121 17:39:49.066056 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="ceilometer-notification-agent" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.066063 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="ceilometer-notification-agent" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.066291 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="ceilometer-notification-agent" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.066322 4823 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="sg-core" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.066341 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="ceilometer-central-agent" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.066353 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" containerName="proxy-httpd" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.068428 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.072815 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.073048 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.075065 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.077895 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.124609 4823 scope.go:117] "RemoveContainer" containerID="cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.141829 4823 scope.go:117] "RemoveContainer" containerID="c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87" Jan 21 17:39:49 crc kubenswrapper[4823]: E0121 17:39:49.142287 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87\": container with ID starting with c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87 not found: ID does not exist" containerID="c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.142316 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87"} err="failed to get container status \"c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87\": rpc error: code = NotFound desc = could not find container \"c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87\": container with ID starting with c7625784d5f75c4ab251b7ee7231bacc4643888a2ef3c782e48865c694ffff87 not found: ID does not exist" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.142398 4823 scope.go:117] "RemoveContainer" containerID="00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d" Jan 21 17:39:49 crc kubenswrapper[4823]: E0121 17:39:49.142692 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d\": container with ID starting with 00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d not found: ID does not exist" containerID="00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.142748 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d"} err="failed to get container status \"00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d\": rpc error: code = NotFound desc = could not find container \"00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d\": container with ID starting with 00938e750056512dc0ae93de2aebe615664039f6e2aeb470cb2ef8c5794c982d not found: ID does not exist" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.142777 4823 scope.go:117] "RemoveContainer" containerID="5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983" Jan 21 17:39:49 crc kubenswrapper[4823]: E0121 17:39:49.143287 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983\": container with ID starting with 5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983 not found: ID does not exist" containerID="5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.143317 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983"} err="failed to get container status \"5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983\": rpc error: code = NotFound desc = could not find container \"5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983\": container with ID starting with 5ae50d3d65abbcbf78d800a6c697ef25bcbcb679faf0e4b4b28325d9f741b983 not found: ID does not exist" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.143334 4823 scope.go:117] "RemoveContainer" containerID="cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89" Jan 21 17:39:49 crc kubenswrapper[4823]: E0121 17:39:49.143633 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89\": container with ID starting with cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89 not found: ID does not exist" containerID="cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.143660 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89"} err="failed to get container status \"cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89\": rpc error: code = NotFound desc = could not find container \"cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89\": container with ID starting with cd4596429bf6ff250ed6ffe30076bb1b75a7a05f77740c2cdb94cbfbe4865d89 not found: ID does not exist" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.215289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-log-httpd\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.215361 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.215408 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.215449 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-scripts\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.215475 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-config-data\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.215500 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45kk\" (UniqueName: \"kubernetes.io/projected/9e390c72-b32b-4647-811d-38fbe8a87d9e-kube-api-access-c45kk\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.215526 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.215559 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-run-httpd\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.317686 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-log-httpd\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.318155 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.318227 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-log-httpd\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.318523 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.318777 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-scripts\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.318987 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-config-data\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.319188 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c45kk\" (UniqueName: \"kubernetes.io/projected/9e390c72-b32b-4647-811d-38fbe8a87d9e-kube-api-access-c45kk\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.319448 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.322982 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-scripts\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.323694 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-run-httpd\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.324505 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-run-httpd\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.324543 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.325474 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-config-data\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.327778 4823 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.338119 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.341835 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c45kk\" (UniqueName: \"kubernetes.io/projected/9e390c72-b32b-4647-811d-38fbe8a87d9e-kube-api-access-c45kk\") pod \"ceilometer-0\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.356878 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750bc099-66ce-41b7-9d5b-5e3872154ff8" path="/var/lib/kubelet/pods/750bc099-66ce-41b7-9d5b-5e3872154ff8/volumes" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.424459 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.931217 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:49 crc kubenswrapper[4823]: I0121 17:39:49.980133 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerStarted","Data":"555ed036162d560290afd77e9e793fc25f2394fdb8b82ff7ea9fc8b072fc693f"} Jan 21 17:39:50 crc kubenswrapper[4823]: I0121 17:39:50.994687 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerStarted","Data":"bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25"} Jan 21 17:39:52 crc kubenswrapper[4823]: I0121 17:39:52.005528 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerStarted","Data":"331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32"} Jan 21 17:39:53 crc kubenswrapper[4823]: I0121 17:39:53.018063 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerStarted","Data":"cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c"} Jan 21 17:39:54 crc kubenswrapper[4823]: I0121 17:39:54.318751 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 17:39:55 crc kubenswrapper[4823]: I0121 17:39:55.041059 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerStarted","Data":"6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633"} Jan 21 17:39:55 crc kubenswrapper[4823]: I0121 17:39:55.041250 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 17:39:55 crc kubenswrapper[4823]: I0121 17:39:55.072488 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.268901693 
podStartE2EDuration="6.072464778s" podCreationTimestamp="2026-01-21 17:39:49 +0000 UTC" firstStartedPulling="2026-01-21 17:39:49.931442856 +0000 UTC m=+1390.857573716" lastFinishedPulling="2026-01-21 17:39:53.735005941 +0000 UTC m=+1394.661136801" observedRunningTime="2026-01-21 17:39:55.065768702 +0000 UTC m=+1395.991899582" watchObservedRunningTime="2026-01-21 17:39:55.072464778 +0000 UTC m=+1395.998595638" Jan 21 17:39:56 crc kubenswrapper[4823]: I0121 17:39:56.616171 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 17:39:56 crc kubenswrapper[4823]: I0121 17:39:56.616736 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 17:39:56 crc kubenswrapper[4823]: I0121 17:39:56.622778 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.062249 4823 generic.go:334] "Generic (PLEG): container finished" podID="83ca2470-f750-4500-9e1c-0e96383fd1ca" containerID="eaefc5586c43d70f4e94b669b4a9b80639b39c77fb44e8ef98c8ea1c6e88fa77" exitCode=137 Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.062658 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83ca2470-f750-4500-9e1c-0e96383fd1ca","Type":"ContainerDied","Data":"eaefc5586c43d70f4e94b669b4a9b80639b39c77fb44e8ef98c8ea1c6e88fa77"} Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.062819 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83ca2470-f750-4500-9e1c-0e96383fd1ca","Type":"ContainerDied","Data":"0eb7eed396378d22810574f792a03797402700ba1c7b5c8e4d6fc4c15d8feadf"} Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.062845 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eb7eed396378d22810574f792a03797402700ba1c7b5c8e4d6fc4c15d8feadf" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.063577 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.070823 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.270638 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-config-data\") pod \"83ca2470-f750-4500-9e1c-0e96383fd1ca\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.270796 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl8fb\" (UniqueName: \"kubernetes.io/projected/83ca2470-f750-4500-9e1c-0e96383fd1ca-kube-api-access-rl8fb\") pod \"83ca2470-f750-4500-9e1c-0e96383fd1ca\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.270827 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-combined-ca-bundle\") pod \"83ca2470-f750-4500-9e1c-0e96383fd1ca\" (UID: \"83ca2470-f750-4500-9e1c-0e96383fd1ca\") " Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.278215 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ca2470-f750-4500-9e1c-0e96383fd1ca-kube-api-access-rl8fb" (OuterVolumeSpecName: "kube-api-access-rl8fb") pod "83ca2470-f750-4500-9e1c-0e96383fd1ca" (UID: "83ca2470-f750-4500-9e1c-0e96383fd1ca"). InnerVolumeSpecName "kube-api-access-rl8fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.285039 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.285098 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.286028 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.286080 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.289875 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.301692 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.312196 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83ca2470-f750-4500-9e1c-0e96383fd1ca" (UID: "83ca2470-f750-4500-9e1c-0e96383fd1ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.314762 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-config-data" (OuterVolumeSpecName: "config-data") pod "83ca2470-f750-4500-9e1c-0e96383fd1ca" (UID: "83ca2470-f750-4500-9e1c-0e96383fd1ca"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.373168 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl8fb\" (UniqueName: \"kubernetes.io/projected/83ca2470-f750-4500-9e1c-0e96383fd1ca-kube-api-access-rl8fb\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.373246 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.373347 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca2470-f750-4500-9e1c-0e96383fd1ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.490319 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kd7pm"] Jan 21 17:39:57 crc kubenswrapper[4823]: E0121 17:39:57.490866 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ca2470-f750-4500-9e1c-0e96383fd1ca" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.490887 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ca2470-f750-4500-9e1c-0e96383fd1ca" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.491128 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ca2470-f750-4500-9e1c-0e96383fd1ca" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.492521 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.504679 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kd7pm"] Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.577730 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4fp\" (UniqueName: \"kubernetes.io/projected/0b62a6b8-398b-429c-b074-9e3db44b8449-kube-api-access-kn4fp\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.577797 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.577872 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-config\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.577897 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.577927 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.578150 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.680120 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.680209 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.680540 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-kn4fp\" (UniqueName: \"kubernetes.io/projected/0b62a6b8-398b-429c-b074-9e3db44b8449-kube-api-access-kn4fp\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.680582 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.680610 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-config\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.680626 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.681356 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.681462 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.681605 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.681961 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.682254 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-config\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.703839 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn4fp\" (UniqueName: 
\"kubernetes.io/projected/0b62a6b8-398b-429c-b074-9e3db44b8449-kube-api-access-kn4fp\") pod \"dnsmasq-dns-cd5cbd7b9-kd7pm\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:57 crc kubenswrapper[4823]: I0121 17:39:57.812660 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.075133 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.111067 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.136924 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.152022 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.153481 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.156759 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.156937 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.157034 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.164262 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.191498 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshkh\" (UniqueName: \"kubernetes.io/projected/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-kube-api-access-jshkh\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.191551 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.191616 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.191636 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.191679 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.292669 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.292753 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.292775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.292808 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.293113 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jshkh\" (UniqueName: \"kubernetes.io/projected/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-kube-api-access-jshkh\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.298718 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.298904 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.298919 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 
17:39:58.309959 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.330562 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshkh\" (UniqueName: \"kubernetes.io/projected/61085eff-9999-4d7e-b8a2-a1a548aa4cd6-kube-api-access-jshkh\") pod \"nova-cell1-novncproxy-0\" (UID: \"61085eff-9999-4d7e-b8a2-a1a548aa4cd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:58 crc kubenswrapper[4823]: W0121 17:39:58.341249 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b62a6b8_398b_429c_b074_9e3db44b8449.slice/crio-43b23b08929b868699110fb395e3cb9abf8c0d917ef778939bd6d90a3e00bd73 WatchSource:0}: Error finding container 43b23b08929b868699110fb395e3cb9abf8c0d917ef778939bd6d90a3e00bd73: Status 404 returned error can't find the container with id 43b23b08929b868699110fb395e3cb9abf8c0d917ef778939bd6d90a3e00bd73 Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.359412 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kd7pm"] Jan 21 17:39:58 crc kubenswrapper[4823]: I0121 17:39:58.499363 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.086202 4823 generic.go:334] "Generic (PLEG): container finished" podID="0b62a6b8-398b-429c-b074-9e3db44b8449" containerID="3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b" exitCode=0 Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.088491 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" event={"ID":"0b62a6b8-398b-429c-b074-9e3db44b8449","Type":"ContainerDied","Data":"3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b"} Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.088535 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" event={"ID":"0b62a6b8-398b-429c-b074-9e3db44b8449","Type":"ContainerStarted","Data":"43b23b08929b868699110fb395e3cb9abf8c0d917ef778939bd6d90a3e00bd73"} Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.099109 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.357926 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ca2470-f750-4500-9e1c-0e96383fd1ca" path="/var/lib/kubelet/pods/83ca2470-f750-4500-9e1c-0e96383fd1ca/volumes" Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.844149 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.845320 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="sg-core" containerID="cri-o://cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c" gracePeriod=30 Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.845462 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="proxy-httpd" containerID="cri-o://6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633" gracePeriod=30 Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.845546 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="ceilometer-notification-agent" containerID="cri-o://331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32" gracePeriod=30 Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.850316 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="ceilometer-central-agent" containerID="cri-o://bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25" gracePeriod=30 Jan 21 17:39:59 crc kubenswrapper[4823]: I0121 17:39:59.906984 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.102756 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"61085eff-9999-4d7e-b8a2-a1a548aa4cd6","Type":"ContainerStarted","Data":"8ccc319c49bbb82a3089a06e0023b52057073e81d77687a934eb34187571bafd"} Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.102798 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"61085eff-9999-4d7e-b8a2-a1a548aa4cd6","Type":"ContainerStarted","Data":"080b2e3a1c32f08d84993f62cbef3cb4aa5154eee60f07dcc516a21073458dd9"} Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.106435 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" event={"ID":"0b62a6b8-398b-429c-b074-9e3db44b8449","Type":"ContainerStarted","Data":"0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006"} Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.106838 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.113795 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerID="6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633" exitCode=0 Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.113821 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerID="cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c" exitCode=2 Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.114051 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-log" containerID="cri-o://79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084" gracePeriod=30 Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.114242 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerDied","Data":"6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633"} Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.114263 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerDied","Data":"cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c"} Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.114307 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-api" containerID="cri-o://d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82" gracePeriod=30 Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.133043 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.133025039 podStartE2EDuration="2.133025039s" podCreationTimestamp="2026-01-21 17:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:40:00.131064841 +0000 UTC m=+1401.057195711" watchObservedRunningTime="2026-01-21 17:40:00.133025039 +0000 UTC m=+1401.059155899" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.161578 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" podStartSLOduration=3.161560315 podStartE2EDuration="3.161560315s" podCreationTimestamp="2026-01-21 17:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:40:00.158882259 +0000 UTC m=+1401.085013119" watchObservedRunningTime="2026-01-21 17:40:00.161560315 +0000 UTC m=+1401.087691175" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.639237 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.755760 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-scripts\") pod \"9e390c72-b32b-4647-811d-38fbe8a87d9e\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.755895 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-ceilometer-tls-certs\") pod \"9e390c72-b32b-4647-811d-38fbe8a87d9e\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.755962 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-sg-core-conf-yaml\") pod \"9e390c72-b32b-4647-811d-38fbe8a87d9e\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.756000 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-config-data\") pod \"9e390c72-b32b-4647-811d-38fbe8a87d9e\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.756050 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c45kk\" (UniqueName: \"kubernetes.io/projected/9e390c72-b32b-4647-811d-38fbe8a87d9e-kube-api-access-c45kk\") pod \"9e390c72-b32b-4647-811d-38fbe8a87d9e\" (UID: 
\"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.756113 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-run-httpd\") pod \"9e390c72-b32b-4647-811d-38fbe8a87d9e\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.756160 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-combined-ca-bundle\") pod \"9e390c72-b32b-4647-811d-38fbe8a87d9e\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.756222 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-log-httpd\") pod \"9e390c72-b32b-4647-811d-38fbe8a87d9e\" (UID: \"9e390c72-b32b-4647-811d-38fbe8a87d9e\") " Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.756921 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e390c72-b32b-4647-811d-38fbe8a87d9e" (UID: "9e390c72-b32b-4647-811d-38fbe8a87d9e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.757920 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e390c72-b32b-4647-811d-38fbe8a87d9e" (UID: "9e390c72-b32b-4647-811d-38fbe8a87d9e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.763999 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-scripts" (OuterVolumeSpecName: "scripts") pod "9e390c72-b32b-4647-811d-38fbe8a87d9e" (UID: "9e390c72-b32b-4647-811d-38fbe8a87d9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.767050 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e390c72-b32b-4647-811d-38fbe8a87d9e-kube-api-access-c45kk" (OuterVolumeSpecName: "kube-api-access-c45kk") pod "9e390c72-b32b-4647-811d-38fbe8a87d9e" (UID: "9e390c72-b32b-4647-811d-38fbe8a87d9e"). InnerVolumeSpecName "kube-api-access-c45kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.809680 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e390c72-b32b-4647-811d-38fbe8a87d9e" (UID: "9e390c72-b32b-4647-811d-38fbe8a87d9e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.840695 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9e390c72-b32b-4647-811d-38fbe8a87d9e" (UID: "9e390c72-b32b-4647-811d-38fbe8a87d9e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.859509 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.859542 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.859551 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c45kk\" (UniqueName: \"kubernetes.io/projected/9e390c72-b32b-4647-811d-38fbe8a87d9e-kube-api-access-c45kk\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.859581 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.859590 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e390c72-b32b-4647-811d-38fbe8a87d9e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.859598 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.892476 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e390c72-b32b-4647-811d-38fbe8a87d9e" (UID: "9e390c72-b32b-4647-811d-38fbe8a87d9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.896025 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-config-data" (OuterVolumeSpecName: "config-data") pod "9e390c72-b32b-4647-811d-38fbe8a87d9e" (UID: "9e390c72-b32b-4647-811d-38fbe8a87d9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.961501 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:00 crc kubenswrapper[4823]: I0121 17:40:00.961543 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e390c72-b32b-4647-811d-38fbe8a87d9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.138637 4823 generic.go:334] "Generic (PLEG): container finished" podID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerID="79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084" exitCode=143 Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.138682 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a","Type":"ContainerDied","Data":"79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084"} Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.142560 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerID="331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32" exitCode=0 Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.142602 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerID="bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25" exitCode=0 Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.142645 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.142652 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerDied","Data":"331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32"} Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.142691 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerDied","Data":"bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25"} Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.142702 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e390c72-b32b-4647-811d-38fbe8a87d9e","Type":"ContainerDied","Data":"555ed036162d560290afd77e9e793fc25f2394fdb8b82ff7ea9fc8b072fc693f"} Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.142718 4823 scope.go:117] "RemoveContainer" containerID="6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.170482 4823 scope.go:117] "RemoveContainer" containerID="cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.191074 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.206521 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.216169 4823 scope.go:117] "RemoveContainer" containerID="331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32" Jan 21 
17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.223377 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:01 crc kubenswrapper[4823]: E0121 17:40:01.223842 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="proxy-httpd" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.223924 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="proxy-httpd" Jan 21 17:40:01 crc kubenswrapper[4823]: E0121 17:40:01.223936 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="ceilometer-central-agent" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.223942 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="ceilometer-central-agent" Jan 21 17:40:01 crc kubenswrapper[4823]: E0121 17:40:01.223956 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="sg-core" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.223964 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="sg-core" Jan 21 17:40:01 crc kubenswrapper[4823]: E0121 17:40:01.223994 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="ceilometer-notification-agent" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.224001 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="ceilometer-notification-agent" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.224176 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="sg-core" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.224185 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="ceilometer-notification-agent" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.224196 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="proxy-httpd" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.224229 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" containerName="ceilometer-central-agent" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.226051 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.229376 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.229592 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.229828 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.233508 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.259752 4823 scope.go:117] "RemoveContainer" containerID="bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.288041 4823 scope.go:117] "RemoveContainer" containerID="6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633" Jan 21 17:40:01 crc kubenswrapper[4823]: E0121 17:40:01.290571 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633\": container with ID starting with 6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633 not found: ID does not exist" containerID="6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.290623 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633"} err="failed to get container status \"6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633\": rpc error: code = NotFound desc = could not find container \"6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633\": container with ID starting with 6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633 not found: ID does not exist" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.290647 4823 scope.go:117] "RemoveContainer" containerID="cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c" Jan 21 17:40:01 crc kubenswrapper[4823]: E0121 17:40:01.291082 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c\": container with ID starting with cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c not found: ID does not exist" containerID="cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.291134 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c"} err="failed to get container status \"cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c\": rpc error: code = NotFound desc = could not find container \"cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c\": container with ID starting with cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c not found: ID does not exist" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.291274 4823 scope.go:117] "RemoveContainer" containerID="331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32" Jan 21 17:40:01 
crc kubenswrapper[4823]: E0121 17:40:01.291688 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32\": container with ID starting with 331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32 not found: ID does not exist" containerID="331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.291730 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32"} err="failed to get container status \"331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32\": rpc error: code = NotFound desc = could not find container \"331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32\": container with ID starting with 331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32 not found: ID does not exist" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.291758 4823 scope.go:117] "RemoveContainer" containerID="bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25" Jan 21 17:40:01 crc kubenswrapper[4823]: E0121 17:40:01.292325 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25\": container with ID starting with bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25 not found: ID does not exist" containerID="bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.292350 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25"} err="failed to get container status \"bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25\": rpc error: code = NotFound desc = could not find container \"bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25\": container with ID starting with bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25 not found: ID does not exist" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.292364 4823 scope.go:117] "RemoveContainer" containerID="6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.292586 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633"} err="failed to get container status \"6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633\": rpc error: code = NotFound desc = could not find container \"6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633\": container with ID starting with 6414a7b7c985ec622f38c5320d09567dcb231267e9e70662f0473189d5665633 not found: ID does not exist" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.292610 4823 scope.go:117] "RemoveContainer" containerID="cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.293454 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c"} err="failed to get container status 
\"cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c\": rpc error: code = NotFound desc = could not find container \"cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c\": container with ID starting with cc81dee2b66623265f39ad80636229d01f1433db54727775be9c42ef7eec044c not found: ID does not exist" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.293496 4823 scope.go:117] "RemoveContainer" containerID="331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.293718 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32"} err="failed to get container status \"331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32\": rpc error: code = NotFound desc = could not find container \"331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32\": container with ID starting with 331a7cd0d37d6598fe69754b70fc06222605fcb3cc290eaadb29a2315f9d8c32 not found: ID does not exist" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.293767 4823 scope.go:117] "RemoveContainer" containerID="bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.294117 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25"} err="failed to get container status \"bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25\": rpc error: code = NotFound desc = could not find container \"bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25\": container with ID starting with bfc59a5949cda50bc92c7899deda7c703d2fe93ca05ef587d9704d280a187d25 not found: ID does not exist" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.358320 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e390c72-b32b-4647-811d-38fbe8a87d9e" path="/var/lib/kubelet/pods/9e390c72-b32b-4647-811d-38fbe8a87d9e/volumes" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.369596 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-config-data\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.369691 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ptgc\" (UniqueName: \"kubernetes.io/projected/7718a42b-8064-4db6-abc1-35a9d9954479-kube-api-access-9ptgc\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.369762 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.369927 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-log-httpd\") pod \"ceilometer-0\" (UID: 
\"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.370114 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.370183 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-scripts\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.370282 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.370373 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-run-httpd\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.472011 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.472199 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-log-httpd\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.472761 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-log-httpd\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.472807 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.472841 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-scripts\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.473354 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.473466 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-run-httpd\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.473731 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-config-data\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.473775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ptgc\" (UniqueName: \"kubernetes.io/projected/7718a42b-8064-4db6-abc1-35a9d9954479-kube-api-access-9ptgc\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.473958 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-run-httpd\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.478221 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.478472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.478813 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-config-data\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.481040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-scripts\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.493419 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.494235 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ptgc\" (UniqueName: 
\"kubernetes.io/projected/7718a42b-8064-4db6-abc1-35a9d9954479-kube-api-access-9ptgc\") pod \"ceilometer-0\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " pod="openstack/ceilometer-0" Jan 21 17:40:01 crc kubenswrapper[4823]: I0121 17:40:01.546399 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:40:02 crc kubenswrapper[4823]: I0121 17:40:02.013958 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:02 crc kubenswrapper[4823]: W0121 17:40:02.043879 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7718a42b_8064_4db6_abc1_35a9d9954479.slice/crio-d4538e342505ac0bfe9e764e4c90543e90827a683678bea52873b8f3826f34bc WatchSource:0}: Error finding container d4538e342505ac0bfe9e764e4c90543e90827a683678bea52873b8f3826f34bc: Status 404 returned error can't find the container with id d4538e342505ac0bfe9e764e4c90543e90827a683678bea52873b8f3826f34bc Jan 21 17:40:02 crc kubenswrapper[4823]: I0121 17:40:02.043932 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:02 crc kubenswrapper[4823]: I0121 17:40:02.153064 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerStarted","Data":"d4538e342505ac0bfe9e764e4c90543e90827a683678bea52873b8f3826f34bc"} Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.168073 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerStarted","Data":"4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955"} Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.501219 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.713920 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.830670 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-logs\") pod \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.830762 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-config-data\") pod \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.830814 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvzg4\" (UniqueName: \"kubernetes.io/projected/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-kube-api-access-tvzg4\") pod \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.830948 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-combined-ca-bundle\") pod \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\" (UID: \"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a\") " Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.831451 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-logs" (OuterVolumeSpecName: "logs") pod "ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" (UID: "ce4fa68f-477c-49a7-a5f0-9a2c0c04876a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.831595 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.841060 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-kube-api-access-tvzg4" (OuterVolumeSpecName: "kube-api-access-tvzg4") pod "ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" (UID: "ce4fa68f-477c-49a7-a5f0-9a2c0c04876a"). InnerVolumeSpecName "kube-api-access-tvzg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.873995 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-config-data" (OuterVolumeSpecName: "config-data") pod "ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" (UID: "ce4fa68f-477c-49a7-a5f0-9a2c0c04876a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.895032 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" (UID: "ce4fa68f-477c-49a7-a5f0-9a2c0c04876a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.937551 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.937602 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvzg4\" (UniqueName: \"kubernetes.io/projected/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-kube-api-access-tvzg4\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:03 crc kubenswrapper[4823]: I0121 17:40:03.937618 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.187540 4823 generic.go:334] "Generic (PLEG): container finished" podID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerID="d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82" exitCode=0 Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.187595 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a","Type":"ContainerDied","Data":"d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82"} Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.187621 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce4fa68f-477c-49a7-a5f0-9a2c0c04876a","Type":"ContainerDied","Data":"5c6288528975517ef70f403db4f137f476d3cc65641ad227550574b9aed4dbcc"} Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.187638 4823 scope.go:117] "RemoveContainer" containerID="d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.187755 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.193298 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerStarted","Data":"57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b"} Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.236868 4823 scope.go:117] "RemoveContainer" containerID="79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.252647 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.263209 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.269321 4823 scope.go:117] "RemoveContainer" containerID="d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82" Jan 21 17:40:04 crc kubenswrapper[4823]: E0121 17:40:04.270256 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82\": container with ID starting with d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82 not found: ID does not exist" containerID="d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.270299 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82"} err="failed to get container status \"d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82\": rpc error: code = NotFound desc = could not find container \"d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82\": container with ID starting with d8b07457578d848cfb571061a58df5aa9ed51d451028c2e01698470898566a82 not found: ID does not exist" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.270329 4823 scope.go:117] "RemoveContainer" containerID="79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084" Jan 21 17:40:04 crc kubenswrapper[4823]: E0121 17:40:04.271304 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084\": container with ID starting with 79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084 not found: ID does not exist" containerID="79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.271327 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084"} err="failed to get container status \"79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084\": rpc error: code = NotFound desc = could not find container \"79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084\": container with ID starting with 79f8ffd2ac15aa2f12cc364937bd611cb6f717e81e6c7552fb3b2db520a45084 not found: ID does not exist" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.282378 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:04 crc kubenswrapper[4823]: E0121 17:40:04.282807 4823 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-api" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.282824 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-api" Jan 21 17:40:04 crc kubenswrapper[4823]: E0121 17:40:04.282844 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-log" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.282864 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-log" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.283051 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-api" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.283077 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" containerName="nova-api-log" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.284040 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.287006 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.287027 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.287043 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.298079 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.346370 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.346407 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.346438 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-config-data\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.346458 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lf8p\" (UniqueName: \"kubernetes.io/projected/3c606182-0ada-4745-8d1f-97c6c30c2255-kube-api-access-9lf8p\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.346625 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-public-tls-certs\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.346800 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c606182-0ada-4745-8d1f-97c6c30c2255-logs\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.449003 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-public-tls-certs\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.449086 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c606182-0ada-4745-8d1f-97c6c30c2255-logs\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.449174 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.449194 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.449209 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-config-data\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.449226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lf8p\" (UniqueName: \"kubernetes.io/projected/3c606182-0ada-4745-8d1f-97c6c30c2255-kube-api-access-9lf8p\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.450316 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c606182-0ada-4745-8d1f-97c6c30c2255-logs\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.453735 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.457617 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.461335 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-public-tls-certs\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.466037 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-config-data\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.472748 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lf8p\" (UniqueName: \"kubernetes.io/projected/3c606182-0ada-4745-8d1f-97c6c30c2255-kube-api-access-9lf8p\") pod \"nova-api-0\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") " pod="openstack/nova-api-0" Jan 21 17:40:04 crc kubenswrapper[4823]: I0121 17:40:04.606655 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:40:05 crc kubenswrapper[4823]: I0121 17:40:05.128691 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:05 crc kubenswrapper[4823]: I0121 17:40:05.206649 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c606182-0ada-4745-8d1f-97c6c30c2255","Type":"ContainerStarted","Data":"80fc775343dde305b4f52b99d7d5b88691a7631c80e9363e4ba197df209c7eb8"} Jan 21 17:40:05 crc kubenswrapper[4823]: I0121 17:40:05.209525 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerStarted","Data":"fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440"} Jan 21 17:40:05 crc kubenswrapper[4823]: I0121 17:40:05.356751 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4fa68f-477c-49a7-a5f0-9a2c0c04876a" path="/var/lib/kubelet/pods/ce4fa68f-477c-49a7-a5f0-9a2c0c04876a/volumes" Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.221896 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c606182-0ada-4745-8d1f-97c6c30c2255","Type":"ContainerStarted","Data":"be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091"} Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.222229 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c606182-0ada-4745-8d1f-97c6c30c2255","Type":"ContainerStarted","Data":"6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f"} Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.231729 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerStarted","Data":"9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11"} Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.232084 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="ceilometer-central-agent" 
containerID="cri-o://4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955" gracePeriod=30 Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.232141 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.232225 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="ceilometer-notification-agent" containerID="cri-o://57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b" gracePeriod=30 Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.232222 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="sg-core" containerID="cri-o://fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440" gracePeriod=30 Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.232313 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="proxy-httpd" containerID="cri-o://9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11" gracePeriod=30 Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.254748 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.254727893 podStartE2EDuration="2.254727893s" podCreationTimestamp="2026-01-21 17:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:40:06.248285684 +0000 UTC m=+1407.174416584" watchObservedRunningTime="2026-01-21 17:40:06.254727893 +0000 UTC m=+1407.180858753" Jan 21 17:40:06 crc kubenswrapper[4823]: I0121 17:40:06.295117 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6069606410000001 podStartE2EDuration="5.295087561s" podCreationTimestamp="2026-01-21 17:40:01 +0000 UTC" firstStartedPulling="2026-01-21 17:40:02.047341681 +0000 UTC m=+1402.973472531" lastFinishedPulling="2026-01-21 17:40:05.735468601 +0000 UTC m=+1406.661599451" observedRunningTime="2026-01-21 17:40:06.271152629 +0000 UTC m=+1407.197283499" watchObservedRunningTime="2026-01-21 17:40:06.295087561 +0000 UTC m=+1407.221218421" Jan 21 17:40:07 crc kubenswrapper[4823]: I0121 17:40:07.248279 4823 generic.go:334] "Generic (PLEG): container finished" podID="7718a42b-8064-4db6-abc1-35a9d9954479" containerID="9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11" exitCode=0 Jan 21 17:40:07 crc kubenswrapper[4823]: I0121 17:40:07.248649 4823 generic.go:334] "Generic (PLEG): container finished" podID="7718a42b-8064-4db6-abc1-35a9d9954479" containerID="fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440" exitCode=2 Jan 21 17:40:07 crc kubenswrapper[4823]: I0121 17:40:07.248663 4823 generic.go:334] "Generic (PLEG): container finished" podID="7718a42b-8064-4db6-abc1-35a9d9954479" containerID="57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b" exitCode=0 Jan 21 17:40:07 crc kubenswrapper[4823]: I0121 17:40:07.248346 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerDied","Data":"9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11"} Jan 21 17:40:07 
crc kubenswrapper[4823]: I0121 17:40:07.248770 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerDied","Data":"fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440"} Jan 21 17:40:07 crc kubenswrapper[4823]: I0121 17:40:07.248802 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerDied","Data":"57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b"} Jan 21 17:40:07 crc kubenswrapper[4823]: I0121 17:40:07.814159 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:40:07 crc kubenswrapper[4823]: I0121 17:40:07.906914 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-nj8rj"] Jan 21 17:40:07 crc kubenswrapper[4823]: I0121 17:40:07.907776 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" podUID="e71ed82f-a626-4fad-b864-da1b1ff313b9" containerName="dnsmasq-dns" containerID="cri-o://ff45895a391b5d30a8161fa7d5444301c9afd4a451346899b1ce655582fcc7f8" gracePeriod=10 Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.262628 4823 generic.go:334] "Generic (PLEG): container finished" podID="e71ed82f-a626-4fad-b864-da1b1ff313b9" containerID="ff45895a391b5d30a8161fa7d5444301c9afd4a451346899b1ce655582fcc7f8" exitCode=0 Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.263175 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" event={"ID":"e71ed82f-a626-4fad-b864-da1b1ff313b9","Type":"ContainerDied","Data":"ff45895a391b5d30a8161fa7d5444301c9afd4a451346899b1ce655582fcc7f8"} Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.458967 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.500847 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.529473 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.549433 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-config\") pod \"e71ed82f-a626-4fad-b864-da1b1ff313b9\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.549538 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-svc\") pod \"e71ed82f-a626-4fad-b864-da1b1ff313b9\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.549609 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-swift-storage-0\") pod \"e71ed82f-a626-4fad-b864-da1b1ff313b9\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.549704 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-nb\") pod \"e71ed82f-a626-4fad-b864-da1b1ff313b9\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.549771 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-sb\") pod \"e71ed82f-a626-4fad-b864-da1b1ff313b9\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.549816 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8zl6\" (UniqueName: \"kubernetes.io/projected/e71ed82f-a626-4fad-b864-da1b1ff313b9-kube-api-access-c8zl6\") pod \"e71ed82f-a626-4fad-b864-da1b1ff313b9\" (UID: \"e71ed82f-a626-4fad-b864-da1b1ff313b9\") " Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.560743 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e71ed82f-a626-4fad-b864-da1b1ff313b9-kube-api-access-c8zl6" (OuterVolumeSpecName: "kube-api-access-c8zl6") pod "e71ed82f-a626-4fad-b864-da1b1ff313b9" (UID: "e71ed82f-a626-4fad-b864-da1b1ff313b9"). InnerVolumeSpecName "kube-api-access-c8zl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.620441 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e71ed82f-a626-4fad-b864-da1b1ff313b9" (UID: "e71ed82f-a626-4fad-b864-da1b1ff313b9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.674283 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.674318 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8zl6\" (UniqueName: \"kubernetes.io/projected/e71ed82f-a626-4fad-b864-da1b1ff313b9-kube-api-access-c8zl6\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.680614 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e71ed82f-a626-4fad-b864-da1b1ff313b9" (UID: "e71ed82f-a626-4fad-b864-da1b1ff313b9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.704360 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e71ed82f-a626-4fad-b864-da1b1ff313b9" (UID: "e71ed82f-a626-4fad-b864-da1b1ff313b9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.719543 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e71ed82f-a626-4fad-b864-da1b1ff313b9" (UID: "e71ed82f-a626-4fad-b864-da1b1ff313b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.764360 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-config" (OuterVolumeSpecName: "config") pod "e71ed82f-a626-4fad-b864-da1b1ff313b9" (UID: "e71ed82f-a626-4fad-b864-da1b1ff313b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.776147 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.776183 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.776192 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:08 crc kubenswrapper[4823]: I0121 17:40:08.776201 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e71ed82f-a626-4fad-b864-da1b1ff313b9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.012844 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.081635 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ptgc\" (UniqueName: \"kubernetes.io/projected/7718a42b-8064-4db6-abc1-35a9d9954479-kube-api-access-9ptgc\") pod \"7718a42b-8064-4db6-abc1-35a9d9954479\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.082052 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-config-data\") pod \"7718a42b-8064-4db6-abc1-35a9d9954479\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.082287 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-ceilometer-tls-certs\") pod \"7718a42b-8064-4db6-abc1-35a9d9954479\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.082347 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-sg-core-conf-yaml\") pod \"7718a42b-8064-4db6-abc1-35a9d9954479\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.082443 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-combined-ca-bundle\") pod \"7718a42b-8064-4db6-abc1-35a9d9954479\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.082500 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-scripts\") pod \"7718a42b-8064-4db6-abc1-35a9d9954479\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.082737 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-run-httpd\") pod \"7718a42b-8064-4db6-abc1-35a9d9954479\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.082777 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-log-httpd\") pod \"7718a42b-8064-4db6-abc1-35a9d9954479\" (UID: \"7718a42b-8064-4db6-abc1-35a9d9954479\") " Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.084366 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7718a42b-8064-4db6-abc1-35a9d9954479" (UID: "7718a42b-8064-4db6-abc1-35a9d9954479"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.084607 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7718a42b-8064-4db6-abc1-35a9d9954479" (UID: "7718a42b-8064-4db6-abc1-35a9d9954479"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.087346 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7718a42b-8064-4db6-abc1-35a9d9954479-kube-api-access-9ptgc" (OuterVolumeSpecName: "kube-api-access-9ptgc") pod "7718a42b-8064-4db6-abc1-35a9d9954479" (UID: "7718a42b-8064-4db6-abc1-35a9d9954479"). InnerVolumeSpecName "kube-api-access-9ptgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.088725 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-scripts" (OuterVolumeSpecName: "scripts") pod "7718a42b-8064-4db6-abc1-35a9d9954479" (UID: "7718a42b-8064-4db6-abc1-35a9d9954479"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.119147 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7718a42b-8064-4db6-abc1-35a9d9954479" (UID: "7718a42b-8064-4db6-abc1-35a9d9954479"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.143697 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7718a42b-8064-4db6-abc1-35a9d9954479" (UID: "7718a42b-8064-4db6-abc1-35a9d9954479"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.187306 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.187350 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.187363 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7718a42b-8064-4db6-abc1-35a9d9954479-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.187374 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ptgc\" (UniqueName: \"kubernetes.io/projected/7718a42b-8064-4db6-abc1-35a9d9954479-kube-api-access-9ptgc\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.187390 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.187401 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.190942 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7718a42b-8064-4db6-abc1-35a9d9954479" (UID: "7718a42b-8064-4db6-abc1-35a9d9954479"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.209113 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-config-data" (OuterVolumeSpecName: "config-data") pod "7718a42b-8064-4db6-abc1-35a9d9954479" (UID: "7718a42b-8064-4db6-abc1-35a9d9954479"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.278152 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.278154 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-nj8rj" event={"ID":"e71ed82f-a626-4fad-b864-da1b1ff313b9","Type":"ContainerDied","Data":"bb3b0e3f4ab5e63579ff76dac8d1013ffe2d5dcba7498d502657c9176c152aea"} Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.278246 4823 scope.go:117] "RemoveContainer" containerID="ff45895a391b5d30a8161fa7d5444301c9afd4a451346899b1ce655582fcc7f8" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.283712 4823 generic.go:334] "Generic (PLEG): container finished" podID="7718a42b-8064-4db6-abc1-35a9d9954479" containerID="4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955" exitCode=0 Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.283949 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.284028 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerDied","Data":"4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955"} Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.284116 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7718a42b-8064-4db6-abc1-35a9d9954479","Type":"ContainerDied","Data":"d4538e342505ac0bfe9e764e4c90543e90827a683678bea52873b8f3826f34bc"} Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.289729 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.289763 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7718a42b-8064-4db6-abc1-35a9d9954479-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.306578 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.317561 4823 scope.go:117] "RemoveContainer" containerID="493ebb5d778f47a5ce5a0ba7f04ee97e663aec64f26568eb1e8cd1f661657852" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.330739 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-nj8rj"] Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.366432 4823 scope.go:117] "RemoveContainer" containerID="9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.425005 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-nj8rj"] Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.431740 4823 scope.go:117] "RemoveContainer" containerID="fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.470351 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.476757 4823 scope.go:117] "RemoveContainer" containerID="57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.478828 4823 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.491012 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.491725 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71ed82f-a626-4fad-b864-da1b1ff313b9" containerName="init" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.491744 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71ed82f-a626-4fad-b864-da1b1ff313b9" containerName="init" Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.491761 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71ed82f-a626-4fad-b864-da1b1ff313b9" containerName="dnsmasq-dns" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.491768 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71ed82f-a626-4fad-b864-da1b1ff313b9" containerName="dnsmasq-dns" Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.491779 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="sg-core" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.491787 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="sg-core" Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.491807 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="ceilometer-central-agent" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.491813 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="ceilometer-central-agent" Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.491841 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="ceilometer-notification-agent" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.491846 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="ceilometer-notification-agent" Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.492469 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="proxy-httpd" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.492485 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="proxy-httpd" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.492720 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="sg-core" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.492736 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="ceilometer-central-agent" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.492747 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="ceilometer-notification-agent" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.492754 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71ed82f-a626-4fad-b864-da1b1ff313b9" containerName="dnsmasq-dns" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.492769 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7718a42b-8064-4db6-abc1-35a9d9954479" containerName="proxy-httpd" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.495973 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.498698 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.499146 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.500193 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.500920 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.513442 4823 scope.go:117] "RemoveContainer" containerID="4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.535733 4823 scope.go:117] "RemoveContainer" containerID="9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11" Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.536304 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11\": container with ID starting with 9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11 not found: ID does not exist" containerID="9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.536337 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11"} err="failed to get container status \"9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11\": rpc error: code = NotFound desc = could not find container \"9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11\": container with ID starting with 9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11 not found: ID does not exist" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.536361 4823 scope.go:117] "RemoveContainer" containerID="fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440" Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.536557 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440\": container with ID starting with fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440 not found: ID does not exist" containerID="fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.536580 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440"} err="failed to get container status \"fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440\": rpc error: code = NotFound desc = could not find container \"fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440\": container with ID starting with fd7bab83948762ce5a3e24b78411711632898221a45a6dd592d250018f863440 not found: ID does not exist" Jan 21 
17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.536594 4823 scope.go:117] "RemoveContainer" containerID="57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b" Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.536755 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b\": container with ID starting with 57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b not found: ID does not exist" containerID="57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.536775 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b"} err="failed to get container status \"57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b\": rpc error: code = NotFound desc = could not find container \"57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b\": container with ID starting with 57918e448a523d20f00954dc954c32cdd021bd8dcb23800fd590c6e9964b0f6b not found: ID does not exist" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.536790 4823 scope.go:117] "RemoveContainer" containerID="4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955" Jan 21 17:40:09 crc kubenswrapper[4823]: E0121 17:40:09.536991 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955\": container with ID starting with 4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955 not found: ID does not exist" containerID="4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.537020 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955"} err="failed to get container status \"4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955\": rpc error: code = NotFound desc = could not find container \"4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955\": container with ID starting with 4ae47d482d2873fc45a8698ca7f5c6a5cb193dbb3171d6f5f4b8cbe2d0776955 not found: ID does not exist" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.581382 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-khj8v"] Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.582628 4823 util.go:30] "No sandbox for pod can be found. 
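[Editor's note] The NotFound errors above are benign: the kubelet asks the CRI runtime for the status of a container it is deleting, but CRI-O already pruned it, so the second RemoveContainer pass logs "ID does not exist" and moves on. A sketch of that idempotent-delete pattern against a gRPC API, treating codes.NotFound as "already gone" — the RuntimeService interface and fake are stand-ins, not the real CRI client:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type RuntimeService interface {
	RemoveContainer(ctx context.Context, id string) error
}

// removeIfPresent swallows NotFound so a repeated delete is a no-op,
// which is exactly why the errors above are logged but harmless.
func removeIfPresent(ctx context.Context, rt RuntimeService, id string) error {
	err := rt.RemoveContainer(ctx, id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already removed\n", id[:12])
		return nil
	}
	return err
}

type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(_ context.Context, id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	err := removeIfPresent(context.Background(), fakeRuntime{},
		"9b8d6afa3ce8fab5c38aee05f2057e3b2806861d323e0b41edc37e4425f63b11")
	fmt.Println("final error:", err) // nil: NotFound was absorbed
}
```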
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.585694 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.585842 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.594409 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-khj8v"]
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.604126 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-config-data\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0"
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.604225 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd6jj\" (UniqueName: \"kubernetes.io/projected/f5aebb1a-417f-4064-963c-1331aaf0f63b-kube-api-access-wd6jj\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0"
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.604253 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5aebb1a-417f-4064-963c-1331aaf0f63b-run-httpd\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0"
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.604291 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0"
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.604309 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0"
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.604426 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5aebb1a-417f-4064-963c-1331aaf0f63b-log-httpd\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0"
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.604460 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0"
Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.604480 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-scripts\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0"
pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.709418 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.709803 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-scripts\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.709921 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.709962 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-config-data\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.710088 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-config-data\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.710186 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd6jj\" (UniqueName: \"kubernetes.io/projected/f5aebb1a-417f-4064-963c-1331aaf0f63b-kube-api-access-wd6jj\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.710236 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-scripts\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.710266 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5aebb1a-417f-4064-963c-1331aaf0f63b-run-httpd\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.710363 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.710402 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.710439 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhwn\" (UniqueName: \"kubernetes.io/projected/b3843c17-8569-45a9-af71-95e31515609b-kube-api-access-shhwn\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.711363 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5aebb1a-417f-4064-963c-1331aaf0f63b-run-httpd\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.710962 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5aebb1a-417f-4064-963c-1331aaf0f63b-log-httpd\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.712288 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5aebb1a-417f-4064-963c-1331aaf0f63b-log-httpd\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.717229 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-scripts\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.717618 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.717831 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.718094 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-config-data\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.724143 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5aebb1a-417f-4064-963c-1331aaf0f63b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.727240 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd6jj\" (UniqueName: 
\"kubernetes.io/projected/f5aebb1a-417f-4064-963c-1331aaf0f63b-kube-api-access-wd6jj\") pod \"ceilometer-0\" (UID: \"f5aebb1a-417f-4064-963c-1331aaf0f63b\") " pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.815160 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.816308 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-config-data\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.816420 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-scripts\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.816613 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shhwn\" (UniqueName: \"kubernetes.io/projected/b3843c17-8569-45a9-af71-95e31515609b-kube-api-access-shhwn\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.817492 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.821030 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.821031 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-config-data\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.821469 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-scripts\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.838120 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhwn\" (UniqueName: \"kubernetes.io/projected/b3843c17-8569-45a9-af71-95e31515609b-kube-api-access-shhwn\") pod \"nova-cell1-cell-mapping-khj8v\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") " pod="openstack/nova-cell1-cell-mapping-khj8v" Jan 21 17:40:09 crc kubenswrapper[4823]: I0121 17:40:09.905270 4823 util.go:30] "No sandbox for pod can be found. 
Jan 21 17:40:10 crc kubenswrapper[4823]: W0121 17:40:10.261293 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3843c17_8569_45a9_af71_95e31515609b.slice/crio-5f4adc6ae6226ecb30df4598a91e884e2e67358e589ef325fa4d06c23722d4b3 WatchSource:0}: Error finding container 5f4adc6ae6226ecb30df4598a91e884e2e67358e589ef325fa4d06c23722d4b3: Status 404 returned error can't find the container with id 5f4adc6ae6226ecb30df4598a91e884e2e67358e589ef325fa4d06c23722d4b3
Jan 21 17:40:10 crc kubenswrapper[4823]: I0121 17:40:10.269937 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-khj8v"]
Jan 21 17:40:10 crc kubenswrapper[4823]: I0121 17:40:10.317204 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 17:40:10 crc kubenswrapper[4823]: I0121 17:40:10.319224 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-khj8v" event={"ID":"b3843c17-8569-45a9-af71-95e31515609b","Type":"ContainerStarted","Data":"5f4adc6ae6226ecb30df4598a91e884e2e67358e589ef325fa4d06c23722d4b3"}
Jan 21 17:40:10 crc kubenswrapper[4823]: W0121 17:40:10.329221 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5aebb1a_417f_4064_963c_1331aaf0f63b.slice/crio-03a3b2b966680ccfd7fcb3dbfd552615949d6d0cca56fb79abcaa40f66f8ee9c WatchSource:0}: Error finding container 03a3b2b966680ccfd7fcb3dbfd552615949d6d0cca56fb79abcaa40f66f8ee9c: Status 404 returned error can't find the container with id 03a3b2b966680ccfd7fcb3dbfd552615949d6d0cca56fb79abcaa40f66f8ee9c
Jan 21 17:40:11 crc kubenswrapper[4823]: I0121 17:40:11.327684 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-khj8v" event={"ID":"b3843c17-8569-45a9-af71-95e31515609b","Type":"ContainerStarted","Data":"97440ed9364111db74d0dc90f42bf355673a068f500302ea526471d224b6aacc"}
Jan 21 17:40:11 crc kubenswrapper[4823]: I0121 17:40:11.329078 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5aebb1a-417f-4064-963c-1331aaf0f63b","Type":"ContainerStarted","Data":"f85555baadf2b50efb54d730a1aba6de0892955e09028dde2a5fcc2718f2e8d2"}
Jan 21 17:40:11 crc kubenswrapper[4823]: I0121 17:40:11.329107 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5aebb1a-417f-4064-963c-1331aaf0f63b","Type":"ContainerStarted","Data":"03a3b2b966680ccfd7fcb3dbfd552615949d6d0cca56fb79abcaa40f66f8ee9c"}
Jan 21 17:40:11 crc kubenswrapper[4823]: I0121 17:40:11.356905 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-khj8v" podStartSLOduration=2.356887113 podStartE2EDuration="2.356887113s" podCreationTimestamp="2026-01-21 17:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:40:11.352269598 +0000 UTC m=+1412.278400458" watchObservedRunningTime="2026-01-21 17:40:11.356887113 +0000 UTC m=+1412.283017973"
Jan 21 17:40:11 crc kubenswrapper[4823]: I0121 17:40:11.372006 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7718a42b-8064-4db6-abc1-35a9d9954479" path="/var/lib/kubelet/pods/7718a42b-8064-4db6-abc1-35a9d9954479/volumes"
Jan 21 17:40:11 crc kubenswrapper[4823]: I0121 17:40:11.373258 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e71ed82f-a626-4fad-b864-da1b1ff313b9" path="/var/lib/kubelet/pods/e71ed82f-a626-4fad-b864-da1b1ff313b9/volumes"
Jan 21 17:40:12 crc kubenswrapper[4823]: I0121 17:40:12.350671 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5aebb1a-417f-4064-963c-1331aaf0f63b","Type":"ContainerStarted","Data":"16d506b77a6ab229b65234d0cf8ed398669e7a282a18db2209b53b81c73f950d"}
Jan 21 17:40:13 crc kubenswrapper[4823]: I0121 17:40:13.364249 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5aebb1a-417f-4064-963c-1331aaf0f63b","Type":"ContainerStarted","Data":"765f80b9755b6eda295301fa428340ff7c6f09b360e7a36497a7fa5530787f52"}
Jan 21 17:40:14 crc kubenswrapper[4823]: I0121 17:40:14.378723 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5aebb1a-417f-4064-963c-1331aaf0f63b","Type":"ContainerStarted","Data":"cf9b5b306ec2306dd1a09f91c44665f9ca9be44e9089a8b73bcdea5236d47ba5"}
Jan 21 17:40:14 crc kubenswrapper[4823]: I0121 17:40:14.381621 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 17:40:14 crc kubenswrapper[4823]: I0121 17:40:14.413688 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.306487708 podStartE2EDuration="5.4136598s" podCreationTimestamp="2026-01-21 17:40:09 +0000 UTC" firstStartedPulling="2026-01-21 17:40:10.331147976 +0000 UTC m=+1411.257278836" lastFinishedPulling="2026-01-21 17:40:13.438320048 +0000 UTC m=+1414.364450928" observedRunningTime="2026-01-21 17:40:14.402239118 +0000 UTC m=+1415.328370008" watchObservedRunningTime="2026-01-21 17:40:14.4136598 +0000 UTC m=+1415.339790700"
Jan 21 17:40:14 crc kubenswrapper[4823]: I0121 17:40:14.607405 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 17:40:14 crc kubenswrapper[4823]: I0121 17:40:14.607464 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 17:40:15 crc kubenswrapper[4823]: I0121 17:40:15.631065 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 17:40:15 crc kubenswrapper[4823]: I0121 17:40:15.631180 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 17:40:16 crc kubenswrapper[4823]: I0121 17:40:16.415620 4823 generic.go:334] "Generic (PLEG): container finished" podID="b3843c17-8569-45a9-af71-95e31515609b" containerID="97440ed9364111db74d0dc90f42bf355673a068f500302ea526471d224b6aacc" exitCode=0
Jan 21 17:40:16 crc kubenswrapper[4823]: I0121 17:40:16.416417 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-khj8v" event={"ID":"b3843c17-8569-45a9-af71-95e31515609b","Type":"ContainerDied","Data":"97440ed9364111db74d0dc90f42bf355673a068f500302ea526471d224b6aacc"}
Jan 21 17:40:17 crc kubenswrapper[4823]: I0121 17:40:17.870607 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-khj8v"
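[Editor's note] The startup-latency line for ceilometer-0 above is internally consistent: podStartSLOduration excludes image-pull time, and the pull time can be recovered from the monotonic clock offsets (the m=+ suffixes) of firstStartedPulling and lastFinishedPulling. A quick Go check of that arithmetic, using only the numbers printed in the log:

```go
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 1411.257278836 // m=+ offset in seconds, from the log
		lastFinishedPulling = 1414.364450928
		podStartE2E         = 5.4136598 // podStartE2EDuration
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull:   %.9fs\n", pull)             // 3.107172092s
	fmt.Printf("SLO duration: %.9fs\n", podStartE2E-pull) // 2.306487708s, matching podStartSLOduration
}
```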
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.014053 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-combined-ca-bundle\") pod \"b3843c17-8569-45a9-af71-95e31515609b\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") "
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.014265 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-scripts\") pod \"b3843c17-8569-45a9-af71-95e31515609b\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") "
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.014579 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-config-data\") pod \"b3843c17-8569-45a9-af71-95e31515609b\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") "
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.014660 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shhwn\" (UniqueName: \"kubernetes.io/projected/b3843c17-8569-45a9-af71-95e31515609b-kube-api-access-shhwn\") pod \"b3843c17-8569-45a9-af71-95e31515609b\" (UID: \"b3843c17-8569-45a9-af71-95e31515609b\") "
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.021673 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3843c17-8569-45a9-af71-95e31515609b-kube-api-access-shhwn" (OuterVolumeSpecName: "kube-api-access-shhwn") pod "b3843c17-8569-45a9-af71-95e31515609b" (UID: "b3843c17-8569-45a9-af71-95e31515609b"). InnerVolumeSpecName "kube-api-access-shhwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.022124 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-scripts" (OuterVolumeSpecName: "scripts") pod "b3843c17-8569-45a9-af71-95e31515609b" (UID: "b3843c17-8569-45a9-af71-95e31515609b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.050113 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-config-data" (OuterVolumeSpecName: "config-data") pod "b3843c17-8569-45a9-af71-95e31515609b" (UID: "b3843c17-8569-45a9-af71-95e31515609b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.057077 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3843c17-8569-45a9-af71-95e31515609b" (UID: "b3843c17-8569-45a9-af71-95e31515609b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.119823 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.119951 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.119973 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3843c17-8569-45a9-af71-95e31515609b-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.119992 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shhwn\" (UniqueName: \"kubernetes.io/projected/b3843c17-8569-45a9-af71-95e31515609b-kube-api-access-shhwn\") on node \"crc\" DevicePath \"\""
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.438826 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-khj8v" event={"ID":"b3843c17-8569-45a9-af71-95e31515609b","Type":"ContainerDied","Data":"5f4adc6ae6226ecb30df4598a91e884e2e67358e589ef325fa4d06c23722d4b3"}
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.438904 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f4adc6ae6226ecb30df4598a91e884e2e67358e589ef325fa4d06c23722d4b3"
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.438917 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-khj8v"
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.622053 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.622495 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-log" containerID="cri-o://6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f" gracePeriod=30
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.623260 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-api" containerID="cri-o://be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091" gracePeriod=30
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.638357 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.638926 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" containerName="nova-scheduler-scheduler" containerID="cri-o://3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198" gracePeriod=30
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.671871 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.672162 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-log" containerID="cri-o://3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb" gracePeriod=30
containerName="nova-metadata-log" containerID="cri-o://3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb" gracePeriod=30 Jan 21 17:40:18 crc kubenswrapper[4823]: I0121 17:40:18.672319 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-metadata" containerID="cri-o://b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc" gracePeriod=30 Jan 21 17:40:19 crc kubenswrapper[4823]: I0121 17:40:19.449843 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerID="6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f" exitCode=143 Jan 21 17:40:19 crc kubenswrapper[4823]: I0121 17:40:19.450176 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c606182-0ada-4745-8d1f-97c6c30c2255","Type":"ContainerDied","Data":"6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f"} Jan 21 17:40:19 crc kubenswrapper[4823]: I0121 17:40:19.451947 4823 generic.go:334] "Generic (PLEG): container finished" podID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerID="3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb" exitCode=143 Jan 21 17:40:19 crc kubenswrapper[4823]: I0121 17:40:19.451974 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bd250d7-7dc6-435f-acd6-77440e490b8e","Type":"ContainerDied","Data":"3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb"} Jan 21 17:40:20 crc kubenswrapper[4823]: E0121 17:40:20.279485 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 17:40:20 crc kubenswrapper[4823]: E0121 17:40:20.281543 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 17:40:20 crc kubenswrapper[4823]: E0121 17:40:20.284331 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 17:40:20 crc kubenswrapper[4823]: E0121 17:40:20.284382 4823 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" containerName="nova-scheduler-scheduler" Jan 21 17:40:21 crc kubenswrapper[4823]: I0121 17:40:21.829920 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": read tcp 10.217.0.2:55178->10.217.0.214:8775: read: connection reset by peer" Jan 21 17:40:21 crc 
kubenswrapper[4823]: I0121 17:40:21.830034 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": read tcp 10.217.0.2:55180->10.217.0.214:8775: read: connection reset by peer" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.344221 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.351580 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.489718 4823 generic.go:334] "Generic (PLEG): container finished" podID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerID="b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc" exitCode=0 Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.489778 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.489787 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bd250d7-7dc6-435f-acd6-77440e490b8e","Type":"ContainerDied","Data":"b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc"} Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.489812 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bd250d7-7dc6-435f-acd6-77440e490b8e","Type":"ContainerDied","Data":"9b2bce08fddb42639db2c35647872f58b2198d408de1febd4cbc0d4f83099644"} Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.489827 4823 scope.go:117] "RemoveContainer" containerID="b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.494248 4823 util.go:48] "No ready sandbox for pod can be found. 
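[Editor's note] The probe failures above are expected noise during teardown: CRI-O refuses new exec sessions once a container is stopping ("cannot register an exec PID"), and the HTTPS readiness probes see connection resets as nova-metadata closes its listener. A simplified readiness-style HTTP probe with a hard client timeout, the same general shape as the failing Get calls here; this is illustrative, not the kubelet's prober implementation (note the kubelet's HTTPS probes do not verify the serving certificate, mimicked below with InsecureSkipVerify):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second, // hard deadline, like a probe timeoutSeconds
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. timeout awaiting headers, or connection reset by peer
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("https://10.217.0.214:8775/"); err != nil {
		fmt.Println("probeResult=failure output:", err)
	}
}
```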
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.494288 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c606182-0ada-4745-8d1f-97c6c30c2255","Type":"ContainerDied","Data":"be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091"}
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.494314 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerID="be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091" exitCode=0
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.494379 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c606182-0ada-4745-8d1f-97c6c30c2255","Type":"ContainerDied","Data":"80fc775343dde305b4f52b99d7d5b88691a7631c80e9363e4ba197df209c7eb8"}
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.513984 4823 scope.go:117] "RemoveContainer" containerID="3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb"
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.518326 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-nova-metadata-tls-certs\") pod \"0bd250d7-7dc6-435f-acd6-77440e490b8e\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.518401 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-internal-tls-certs\") pod \"3c606182-0ada-4745-8d1f-97c6c30c2255\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.518472 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-config-data\") pod \"3c606182-0ada-4745-8d1f-97c6c30c2255\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.518490 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c606182-0ada-4745-8d1f-97c6c30c2255-logs\") pod \"3c606182-0ada-4745-8d1f-97c6c30c2255\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.518533 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-config-data\") pod \"0bd250d7-7dc6-435f-acd6-77440e490b8e\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.518573 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-combined-ca-bundle\") pod \"0bd250d7-7dc6-435f-acd6-77440e490b8e\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.518603 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-combined-ca-bundle\") pod \"3c606182-0ada-4745-8d1f-97c6c30c2255\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.518691 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-public-tls-certs\") pod \"3c606182-0ada-4745-8d1f-97c6c30c2255\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.518760 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lf8p\" (UniqueName: \"kubernetes.io/projected/3c606182-0ada-4745-8d1f-97c6c30c2255-kube-api-access-9lf8p\") pod \"3c606182-0ada-4745-8d1f-97c6c30c2255\" (UID: \"3c606182-0ada-4745-8d1f-97c6c30c2255\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.519174 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c606182-0ada-4745-8d1f-97c6c30c2255-logs" (OuterVolumeSpecName: "logs") pod "3c606182-0ada-4745-8d1f-97c6c30c2255" (UID: "3c606182-0ada-4745-8d1f-97c6c30c2255"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.519199 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97c5r\" (UniqueName: \"kubernetes.io/projected/0bd250d7-7dc6-435f-acd6-77440e490b8e-kube-api-access-97c5r\") pod \"0bd250d7-7dc6-435f-acd6-77440e490b8e\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.519320 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd250d7-7dc6-435f-acd6-77440e490b8e-logs\") pod \"0bd250d7-7dc6-435f-acd6-77440e490b8e\" (UID: \"0bd250d7-7dc6-435f-acd6-77440e490b8e\") "
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.520042 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c606182-0ada-4745-8d1f-97c6c30c2255-logs\") on node \"crc\" DevicePath \"\""
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.520835 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd250d7-7dc6-435f-acd6-77440e490b8e-logs" (OuterVolumeSpecName: "logs") pod "0bd250d7-7dc6-435f-acd6-77440e490b8e" (UID: "0bd250d7-7dc6-435f-acd6-77440e490b8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.524154 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c606182-0ada-4745-8d1f-97c6c30c2255-kube-api-access-9lf8p" (OuterVolumeSpecName: "kube-api-access-9lf8p") pod "3c606182-0ada-4745-8d1f-97c6c30c2255" (UID: "3c606182-0ada-4745-8d1f-97c6c30c2255"). InnerVolumeSpecName "kube-api-access-9lf8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.525271 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd250d7-7dc6-435f-acd6-77440e490b8e-kube-api-access-97c5r" (OuterVolumeSpecName: "kube-api-access-97c5r") pod "0bd250d7-7dc6-435f-acd6-77440e490b8e" (UID: "0bd250d7-7dc6-435f-acd6-77440e490b8e"). InnerVolumeSpecName "kube-api-access-97c5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.550609 4823 scope.go:117] "RemoveContainer" containerID="b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc" Jan 21 17:40:22 crc kubenswrapper[4823]: E0121 17:40:22.551101 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc\": container with ID starting with b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc not found: ID does not exist" containerID="b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.551133 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc"} err="failed to get container status \"b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc\": rpc error: code = NotFound desc = could not find container \"b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc\": container with ID starting with b73b09773a1a12c8ed5c8b1009399da6540a3d3ac6fcb86658e6286f3bff18dc not found: ID does not exist" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.551157 4823 scope.go:117] "RemoveContainer" containerID="3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.552816 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-config-data" (OuterVolumeSpecName: "config-data") pod "3c606182-0ada-4745-8d1f-97c6c30c2255" (UID: "3c606182-0ada-4745-8d1f-97c6c30c2255"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.553306 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bd250d7-7dc6-435f-acd6-77440e490b8e" (UID: "0bd250d7-7dc6-435f-acd6-77440e490b8e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:22 crc kubenswrapper[4823]: E0121 17:40:22.553359 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb\": container with ID starting with 3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb not found: ID does not exist" containerID="3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.553378 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb"} err="failed to get container status \"3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb\": rpc error: code = NotFound desc = could not find container \"3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb\": container with ID starting with 3f5b02f96089a3425391352b4707b0c840b2999805902ceb1527c7955e4985cb not found: ID does not exist" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.553394 4823 scope.go:117] "RemoveContainer" containerID="be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.561168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c606182-0ada-4745-8d1f-97c6c30c2255" (UID: "3c606182-0ada-4745-8d1f-97c6c30c2255"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.561401 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-config-data" (OuterVolumeSpecName: "config-data") pod "0bd250d7-7dc6-435f-acd6-77440e490b8e" (UID: "0bd250d7-7dc6-435f-acd6-77440e490b8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.586270 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3c606182-0ada-4745-8d1f-97c6c30c2255" (UID: "3c606182-0ada-4745-8d1f-97c6c30c2255"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.588131 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3c606182-0ada-4745-8d1f-97c6c30c2255" (UID: "3c606182-0ada-4745-8d1f-97c6c30c2255"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.603218 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0bd250d7-7dc6-435f-acd6-77440e490b8e" (UID: "0bd250d7-7dc6-435f-acd6-77440e490b8e"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.606097 4823 scope.go:117] "RemoveContainer" containerID="6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622105 4823 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622147 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622159 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622170 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622181 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622190 4823 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c606182-0ada-4745-8d1f-97c6c30c2255-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622202 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lf8p\" (UniqueName: \"kubernetes.io/projected/3c606182-0ada-4745-8d1f-97c6c30c2255-kube-api-access-9lf8p\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622215 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97c5r\" (UniqueName: \"kubernetes.io/projected/0bd250d7-7dc6-435f-acd6-77440e490b8e-kube-api-access-97c5r\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622226 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd250d7-7dc6-435f-acd6-77440e490b8e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.622236 4823 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd250d7-7dc6-435f-acd6-77440e490b8e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.629505 4823 scope.go:117] "RemoveContainer" containerID="be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091" Jan 21 17:40:22 crc kubenswrapper[4823]: E0121 17:40:22.630023 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091\": container with ID starting with be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091 not found: ID does not exist" 
containerID="be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.630078 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091"} err="failed to get container status \"be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091\": rpc error: code = NotFound desc = could not find container \"be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091\": container with ID starting with be846e3af633c7b1db6599f766df76729044abb68d3fe8967a64ec566e8ca091 not found: ID does not exist" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.630107 4823 scope.go:117] "RemoveContainer" containerID="6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f" Jan 21 17:40:22 crc kubenswrapper[4823]: E0121 17:40:22.630372 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f\": container with ID starting with 6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f not found: ID does not exist" containerID="6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.630429 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f"} err="failed to get container status \"6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f\": rpc error: code = NotFound desc = could not find container \"6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f\": container with ID starting with 6399e933375982ce8018121576b3c7acfc69857171d122fd5a900613a5632d8f not found: ID does not exist" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.828384 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.843369 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.862746 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.872498 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.901617 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:40:22 crc kubenswrapper[4823]: E0121 17:40:22.903206 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-log" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903228 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-log" Jan 21 17:40:22 crc kubenswrapper[4823]: E0121 17:40:22.903255 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-metadata" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903262 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-metadata" Jan 21 17:40:22 crc kubenswrapper[4823]: E0121 17:40:22.903279 4823 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-log" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903285 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-log" Jan 21 17:40:22 crc kubenswrapper[4823]: E0121 17:40:22.903315 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3843c17-8569-45a9-af71-95e31515609b" containerName="nova-manage" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903320 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3843c17-8569-45a9-af71-95e31515609b" containerName="nova-manage" Jan 21 17:40:22 crc kubenswrapper[4823]: E0121 17:40:22.903347 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-api" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903355 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-api" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903694 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-log" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903709 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3843c17-8569-45a9-af71-95e31515609b" containerName="nova-manage" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903735 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-metadata" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903746 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" containerName="nova-metadata-log" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.903765 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" containerName="nova-api-api" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.906255 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.917708 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.917800 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.921595 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.927081 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.934470 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.934757 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.934948 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.941157 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:40:22 crc kubenswrapper[4823]: I0121 17:40:22.967476 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.031663 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ce89b1-4004-4486-872c-87d8965725da-config-data\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.031735 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ce89b1-4004-4486-872c-87d8965725da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.031765 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcsrt\" (UniqueName: \"kubernetes.io/projected/28ce89b1-4004-4486-872c-87d8965725da-kube-api-access-gcsrt\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.031789 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ce89b1-4004-4486-872c-87d8965725da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.031813 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.031841 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ce89b1-4004-4486-872c-87d8965725da-logs\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.031967 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drmv2\" (UniqueName: \"kubernetes.io/projected/8e688551-fa1e-41c2-ae1e-18ca5073c34e-kube-api-access-drmv2\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc 
kubenswrapper[4823]: I0121 17:40:23.032027 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.032068 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-config-data\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.032116 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e688551-fa1e-41c2-ae1e-18ca5073c34e-logs\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.032137 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134172 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-config-data\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134247 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e688551-fa1e-41c2-ae1e-18ca5073c34e-logs\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134270 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134305 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ce89b1-4004-4486-872c-87d8965725da-config-data\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134342 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ce89b1-4004-4486-872c-87d8965725da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134368 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcsrt\" (UniqueName: \"kubernetes.io/projected/28ce89b1-4004-4486-872c-87d8965725da-kube-api-access-gcsrt\") pod \"nova-metadata-0\" (UID: 
\"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134391 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ce89b1-4004-4486-872c-87d8965725da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134411 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134436 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ce89b1-4004-4486-872c-87d8965725da-logs\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134541 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drmv2\" (UniqueName: \"kubernetes.io/projected/8e688551-fa1e-41c2-ae1e-18ca5073c34e-kube-api-access-drmv2\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134589 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.134969 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e688551-fa1e-41c2-ae1e-18ca5073c34e-logs\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.135528 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ce89b1-4004-4486-872c-87d8965725da-logs\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.138226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ce89b1-4004-4486-872c-87d8965725da-config-data\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.139454 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.139474 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ce89b1-4004-4486-872c-87d8965725da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " 
pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.140440 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-config-data\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.141594 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ce89b1-4004-4486-872c-87d8965725da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.141794 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.147456 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e688551-fa1e-41c2-ae1e-18ca5073c34e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.150396 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drmv2\" (UniqueName: \"kubernetes.io/projected/8e688551-fa1e-41c2-ae1e-18ca5073c34e-kube-api-access-drmv2\") pod \"nova-api-0\" (UID: \"8e688551-fa1e-41c2-ae1e-18ca5073c34e\") " pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.152082 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcsrt\" (UniqueName: \"kubernetes.io/projected/28ce89b1-4004-4486-872c-87d8965725da-kube-api-access-gcsrt\") pod \"nova-metadata-0\" (UID: \"28ce89b1-4004-4486-872c-87d8965725da\") " pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.269119 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.281276 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.363414 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd250d7-7dc6-435f-acd6-77440e490b8e" path="/var/lib/kubelet/pods/0bd250d7-7dc6-435f-acd6-77440e490b8e/volumes" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.364730 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c606182-0ada-4745-8d1f-97c6c30c2255" path="/var/lib/kubelet/pods/3c606182-0ada-4745-8d1f-97c6c30c2255/volumes" Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.755725 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 17:40:23 crc kubenswrapper[4823]: I0121 17:40:23.831227 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 17:40:23 crc kubenswrapper[4823]: W0121 17:40:23.836089 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ce89b1_4004_4486_872c_87d8965725da.slice/crio-7e7a6fe31a763b49a12b822a7c5048e0495029b8dd0015a7bc27a7725bf65a32 WatchSource:0}: Error finding container 7e7a6fe31a763b49a12b822a7c5048e0495029b8dd0015a7bc27a7725bf65a32: Status 404 returned error can't find the container with id 7e7a6fe31a763b49a12b822a7c5048e0495029b8dd0015a7bc27a7725bf65a32 Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.325040 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.472804 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-combined-ca-bundle\") pod \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.472884 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-config-data\") pod \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.472938 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfxdr\" (UniqueName: \"kubernetes.io/projected/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-kube-api-access-rfxdr\") pod \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\" (UID: \"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b\") " Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.477135 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-kube-api-access-rfxdr" (OuterVolumeSpecName: "kube-api-access-rfxdr") pod "e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" (UID: "e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b"). InnerVolumeSpecName "kube-api-access-rfxdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.510113 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" (UID: "e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.510971 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-config-data" (OuterVolumeSpecName: "config-data") pod "e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" (UID: "e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.527029 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e688551-fa1e-41c2-ae1e-18ca5073c34e","Type":"ContainerStarted","Data":"370442a6176819cd910a592ab73411bc7e7958cc6ff640dba6b04d4a63787fea"} Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.527086 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e688551-fa1e-41c2-ae1e-18ca5073c34e","Type":"ContainerStarted","Data":"0184c63ca98b4310b5ce49d363450a31affcd0f5569517990531c9aa003252bf"} Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.527098 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e688551-fa1e-41c2-ae1e-18ca5073c34e","Type":"ContainerStarted","Data":"60398d998dd225ba4d64ae2ca7af0ebb90f4584ae9765ec572997a28e49982ae"} Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.529492 4823 generic.go:334] "Generic (PLEG): container finished" podID="e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" containerID="3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198" exitCode=0 Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.529553 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b","Type":"ContainerDied","Data":"3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198"} Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.529579 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b","Type":"ContainerDied","Data":"20fd088a6859bf5e96855b7b9b8748056f4722dc24e383822d62850849f34e24"} Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.529595 4823 scope.go:117] "RemoveContainer" containerID="3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.529698 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.547777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28ce89b1-4004-4486-872c-87d8965725da","Type":"ContainerStarted","Data":"973457304f39c7b5567b18802bd9597cc659beca4675a25d00d80f84ad01fe47"} Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.547817 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28ce89b1-4004-4486-872c-87d8965725da","Type":"ContainerStarted","Data":"d8c9d4aefa8a76bd06f649101c0c241d05db5fd345a51ad4772dcef658420ef0"} Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.547827 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28ce89b1-4004-4486-872c-87d8965725da","Type":"ContainerStarted","Data":"7e7a6fe31a763b49a12b822a7c5048e0495029b8dd0015a7bc27a7725bf65a32"} Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.564535 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5645112389999998 podStartE2EDuration="2.564511239s" podCreationTimestamp="2026-01-21 17:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:40:24.559956597 +0000 UTC m=+1425.486087467" watchObservedRunningTime="2026-01-21 17:40:24.564511239 +0000 UTC m=+1425.490642099" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.569448 4823 scope.go:117] "RemoveContainer" containerID="3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198" Jan 21 17:40:24 crc kubenswrapper[4823]: E0121 17:40:24.571008 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198\": container with ID starting with 3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198 not found: ID does not exist" containerID="3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.571048 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198"} err="failed to get container status \"3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198\": rpc error: code = NotFound desc = could not find container \"3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198\": container with ID starting with 3d1548f70f354a6d5c6fb5f98cfda51b648968974c56aeb1c761e988f409b198 not found: ID does not exist" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.576567 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.576612 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.576644 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfxdr\" (UniqueName: \"kubernetes.io/projected/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b-kube-api-access-rfxdr\") on node 
\"crc\" DevicePath \"\"" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.595531 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.595507276 podStartE2EDuration="2.595507276s" podCreationTimestamp="2026-01-21 17:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:40:24.581764546 +0000 UTC m=+1425.507895406" watchObservedRunningTime="2026-01-21 17:40:24.595507276 +0000 UTC m=+1425.521638146" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.616069 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.646695 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.646965 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 17:40:24 crc kubenswrapper[4823]: E0121 17:40:24.647400 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" containerName="nova-scheduler-scheduler" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.647460 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" containerName="nova-scheduler-scheduler" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.647719 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" containerName="nova-scheduler-scheduler" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.648290 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.648424 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.683665 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.787206 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4384a0e-44ee-479f-bd81-f3b486c71da8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e4384a0e-44ee-479f-bd81-f3b486c71da8\") " pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.787262 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99z5h\" (UniqueName: \"kubernetes.io/projected/e4384a0e-44ee-479f-bd81-f3b486c71da8-kube-api-access-99z5h\") pod \"nova-scheduler-0\" (UID: \"e4384a0e-44ee-479f-bd81-f3b486c71da8\") " pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.787294 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4384a0e-44ee-479f-bd81-f3b486c71da8-config-data\") pod \"nova-scheduler-0\" (UID: \"e4384a0e-44ee-479f-bd81-f3b486c71da8\") " pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.888837 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4384a0e-44ee-479f-bd81-f3b486c71da8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e4384a0e-44ee-479f-bd81-f3b486c71da8\") " pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.889174 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99z5h\" (UniqueName: \"kubernetes.io/projected/e4384a0e-44ee-479f-bd81-f3b486c71da8-kube-api-access-99z5h\") pod \"nova-scheduler-0\" (UID: \"e4384a0e-44ee-479f-bd81-f3b486c71da8\") " pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.889200 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4384a0e-44ee-479f-bd81-f3b486c71da8-config-data\") pod \"nova-scheduler-0\" (UID: \"e4384a0e-44ee-479f-bd81-f3b486c71da8\") " pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.894526 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4384a0e-44ee-479f-bd81-f3b486c71da8-config-data\") pod \"nova-scheduler-0\" (UID: \"e4384a0e-44ee-479f-bd81-f3b486c71da8\") " pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.894621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4384a0e-44ee-479f-bd81-f3b486c71da8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e4384a0e-44ee-479f-bd81-f3b486c71da8\") " pod="openstack/nova-scheduler-0" Jan 21 17:40:24 crc kubenswrapper[4823]: I0121 17:40:24.908154 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99z5h\" (UniqueName: \"kubernetes.io/projected/e4384a0e-44ee-479f-bd81-f3b486c71da8-kube-api-access-99z5h\") pod \"nova-scheduler-0\" (UID: \"e4384a0e-44ee-479f-bd81-f3b486c71da8\") " 
pod="openstack/nova-scheduler-0" Jan 21 17:40:25 crc kubenswrapper[4823]: I0121 17:40:25.016891 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 17:40:25 crc kubenswrapper[4823]: I0121 17:40:25.359287 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b" path="/var/lib/kubelet/pods/e7cf1f05-0aa5-4856-9f0a-0c8e4e7a333b/volumes" Jan 21 17:40:25 crc kubenswrapper[4823]: I0121 17:40:25.554755 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 17:40:26 crc kubenswrapper[4823]: I0121 17:40:26.574349 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4384a0e-44ee-479f-bd81-f3b486c71da8","Type":"ContainerStarted","Data":"f5b13ee0d82b7f312b6670b9eae45cc71a9d81e781a1f396111326ed28bc8bea"} Jan 21 17:40:26 crc kubenswrapper[4823]: I0121 17:40:26.574663 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4384a0e-44ee-479f-bd81-f3b486c71da8","Type":"ContainerStarted","Data":"76e8ad9bff27cbd5b2fcf149b32ec2cc0a6c02018e8ffa10bd2dc74b37590df5"} Jan 21 17:40:26 crc kubenswrapper[4823]: I0121 17:40:26.597575 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.597552997 podStartE2EDuration="2.597552997s" podCreationTimestamp="2026-01-21 17:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:40:26.589731094 +0000 UTC m=+1427.515861954" watchObservedRunningTime="2026-01-21 17:40:26.597552997 +0000 UTC m=+1427.523683857" Jan 21 17:40:28 crc kubenswrapper[4823]: I0121 17:40:28.269526 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 17:40:28 crc kubenswrapper[4823]: I0121 17:40:28.269894 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 17:40:30 crc kubenswrapper[4823]: I0121 17:40:30.017653 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 17:40:33 crc kubenswrapper[4823]: I0121 17:40:33.269663 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 17:40:33 crc kubenswrapper[4823]: I0121 17:40:33.270130 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 17:40:33 crc kubenswrapper[4823]: I0121 17:40:33.282734 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 17:40:33 crc kubenswrapper[4823]: I0121 17:40:33.282797 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 17:40:34 crc kubenswrapper[4823]: I0121 17:40:34.288102 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="28ce89b1-4004-4486-872c-87d8965725da" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 17:40:34 crc kubenswrapper[4823]: I0121 17:40:34.288116 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="28ce89b1-4004-4486-872c-87d8965725da" containerName="nova-metadata-log" 
probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 17:40:34 crc kubenswrapper[4823]: I0121 17:40:34.300081 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e688551-fa1e-41c2-ae1e-18ca5073c34e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 17:40:34 crc kubenswrapper[4823]: I0121 17:40:34.300496 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e688551-fa1e-41c2-ae1e-18ca5073c34e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:40:35 crc kubenswrapper[4823]: I0121 17:40:35.017786 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 17:40:35 crc kubenswrapper[4823]: I0121 17:40:35.064523 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 17:40:35 crc kubenswrapper[4823]: I0121 17:40:35.737235 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 17:40:39 crc kubenswrapper[4823]: I0121 17:40:39.826597 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.274808 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.277979 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.281863 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.295526 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.296002 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.306834 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.309357 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.790241 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.796387 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 17:40:43 crc kubenswrapper[4823]: I0121 17:40:43.796982 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 17:40:52 crc kubenswrapper[4823]: I0121 17:40:52.690643 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 17:40:53 crc kubenswrapper[4823]: I0121 17:40:53.594067 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 17:40:56 crc kubenswrapper[4823]: I0121 
17:40:56.783220 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w8s6p" podUID="49bc570b-b84d-48a3-b322-95b9ece80f26" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:40:57 crc kubenswrapper[4823]: I0121 17:40:57.647010 4823 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wml2h container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 17:40:57 crc kubenswrapper[4823]: I0121 17:40:57.647395 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" podUID="3536b7cb-5def-4468-9282-897a30251cd4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 17:40:57 crc kubenswrapper[4823]: I0121 17:40:57.647495 4823 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wml2h container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 17:40:57 crc kubenswrapper[4823]: I0121 17:40:57.647513 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wml2h" podUID="3536b7cb-5def-4468-9282-897a30251cd4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 17:40:58 crc kubenswrapper[4823]: I0121 17:40:58.375528 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" containerName="rabbitmq" containerID="cri-o://1ebf3a1bc690a6fb2b3fbf1fd41bc007dc0aa35efc3e7bfe807612fc6f7960d4" gracePeriod=604795 Jan 21 17:40:59 crc kubenswrapper[4823]: I0121 17:40:59.935271 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" containerName="rabbitmq" containerID="cri-o://37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6" gracePeriod=604794 Jan 21 17:41:02 crc kubenswrapper[4823]: I0121 17:41:02.112154 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 21 17:41:02 crc kubenswrapper[4823]: I0121 17:41:02.350402 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.011756 4823 generic.go:334] "Generic (PLEG): container finished" podID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" 
containerID="1ebf3a1bc690a6fb2b3fbf1fd41bc007dc0aa35efc3e7bfe807612fc6f7960d4" exitCode=0 Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.011956 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"619d3aad-c1a1-4d30-ac6f-a0b9535371dc","Type":"ContainerDied","Data":"1ebf3a1bc690a6fb2b3fbf1fd41bc007dc0aa35efc3e7bfe807612fc6f7960d4"} Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.893272 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.973739 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-erlang-cookie\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.973828 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6jht\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-kube-api-access-p6jht\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.973947 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-pod-info\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.973972 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-tls\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.974025 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-server-conf\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.974088 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-config-data\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.974147 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-plugins\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.974198 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-erlang-cookie-secret\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.974221 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-confd\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.974244 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.974272 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-plugins-conf\") pod \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\" (UID: \"619d3aad-c1a1-4d30-ac6f-a0b9535371dc\") " Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.975466 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.976492 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.977667 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.987801 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.988909 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-pod-info" (OuterVolumeSpecName: "pod-info") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.994149 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-kube-api-access-p6jht" (OuterVolumeSpecName: "kube-api-access-p6jht") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "kube-api-access-p6jht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:41:05 crc kubenswrapper[4823]: I0121 17:41:05.998291 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.025598 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.053248 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-config-data" (OuterVolumeSpecName: "config-data") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.053617 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"619d3aad-c1a1-4d30-ac6f-a0b9535371dc","Type":"ContainerDied","Data":"f9837f691cfa99ff8cda01afeea49c100bc9c3e83542ce93e3d4b74d81f7a8f7"} Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.053668 4823 scope.go:117] "RemoveContainer" containerID="1ebf3a1bc690a6fb2b3fbf1fd41bc007dc0aa35efc3e7bfe807612fc6f7960d4" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.053713 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.087501 4823 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.087808 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.087825 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.087836 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.087847 4823 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.087906 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.087918 4823 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.087930 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.087944 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6jht\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-kube-api-access-p6jht\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.094116 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-server-conf" (OuterVolumeSpecName: "server-conf") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.128491 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.189521 4823 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.189559 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.189694 4823 scope.go:117] "RemoveContainer" containerID="bf8050866cc839063b75d98e01383637e0e7b6575b76d4460b5d2bb06be62370" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.216921 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "619d3aad-c1a1-4d30-ac6f-a0b9535371dc" (UID: "619d3aad-c1a1-4d30-ac6f-a0b9535371dc"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.290890 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/619d3aad-c1a1-4d30-ac6f-a0b9535371dc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.393032 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.402638 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.425197 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 17:41:06 crc kubenswrapper[4823]: E0121 17:41:06.425605 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" containerName="rabbitmq" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.425622 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" containerName="rabbitmq" Jan 21 17:41:06 crc kubenswrapper[4823]: E0121 17:41:06.425637 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" containerName="setup-container" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.425643 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" containerName="setup-container" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.425819 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" containerName="rabbitmq" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.428723 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.433797 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.433870 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.433968 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.436980 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.437801 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8pdgl" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.437955 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.441691 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.466909 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598176 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b31dcb7b-15e2-4a14-bdab-d2887043e52a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598215 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b31dcb7b-15e2-4a14-bdab-d2887043e52a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598273 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598311 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b31dcb7b-15e2-4a14-bdab-d2887043e52a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598334 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598364 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2tx\" (UniqueName: 
\"kubernetes.io/projected/b31dcb7b-15e2-4a14-bdab-d2887043e52a-kube-api-access-dx2tx\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598395 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598455 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b31dcb7b-15e2-4a14-bdab-d2887043e52a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598492 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b31dcb7b-15e2-4a14-bdab-d2887043e52a-config-data\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598513 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.598534 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702212 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702279 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702342 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b31dcb7b-15e2-4a14-bdab-d2887043e52a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702368 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b31dcb7b-15e2-4a14-bdab-d2887043e52a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " 
pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702429 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702474 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b31dcb7b-15e2-4a14-bdab-d2887043e52a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702503 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx2tx\" (UniqueName: \"kubernetes.io/projected/b31dcb7b-15e2-4a14-bdab-d2887043e52a-kube-api-access-dx2tx\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702580 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702651 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b31dcb7b-15e2-4a14-bdab-d2887043e52a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.702704 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b31dcb7b-15e2-4a14-bdab-d2887043e52a-config-data\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.703695 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b31dcb7b-15e2-4a14-bdab-d2887043e52a-config-data\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.703956 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.714189 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b31dcb7b-15e2-4a14-bdab-d2887043e52a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.714630 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.714767 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b31dcb7b-15e2-4a14-bdab-d2887043e52a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.715501 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.716591 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b31dcb7b-15e2-4a14-bdab-d2887043e52a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.739683 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b31dcb7b-15e2-4a14-bdab-d2887043e52a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.750163 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.751610 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b31dcb7b-15e2-4a14-bdab-d2887043e52a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.758082 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.762792 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx2tx\" (UniqueName: \"kubernetes.io/projected/b31dcb7b-15e2-4a14-bdab-d2887043e52a-kube-api-access-dx2tx\") pod \"rabbitmq-server-0\" (UID: \"b31dcb7b-15e2-4a14-bdab-d2887043e52a\") " pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.821885 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 17:41:06 crc kubenswrapper[4823]: I0121 17:41:06.990571 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.092003 4823 generic.go:334] "Generic (PLEG): container finished" podID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" containerID="37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6" exitCode=0 Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.092284 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dd8ea30-a041-4ce6-8a36-b8a355b076dc","Type":"ContainerDied","Data":"37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6"} Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.092320 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dd8ea30-a041-4ce6-8a36-b8a355b076dc","Type":"ContainerDied","Data":"84f86a4d0e95ecb61b3e5d4a613052d9019a0e5e52893cf676b0681ac6e79f48"} Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.092336 4823 scope.go:117] "RemoveContainer" containerID="37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.092484 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.120756 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-pod-info\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.120835 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-erlang-cookie-secret\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.120986 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-plugins\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.121037 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-erlang-cookie\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.121059 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95dln\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-kube-api-access-95dln\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.121181 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-confd\") pod 
\"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.121248 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.121297 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-config-data\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.121317 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-plugins-conf\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.121357 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-server-conf\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.121425 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-tls\") pod \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\" (UID: \"4dd8ea30-a041-4ce6-8a36-b8a355b076dc\") " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.122206 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.122707 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.127271 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.131314 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-pod-info" (OuterVolumeSpecName: "pod-info") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.131516 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.131661 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-kube-api-access-95dln" (OuterVolumeSpecName: "kube-api-access-95dln") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "kube-api-access-95dln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.132531 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.146516 4823 scope.go:117] "RemoveContainer" containerID="72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.151209 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.191496 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-server-conf" (OuterVolumeSpecName: "server-conf") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.200580 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-config-data" (OuterVolumeSpecName: "config-data") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.214506 4823 scope.go:117] "RemoveContainer" containerID="37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6" Jan 21 17:41:07 crc kubenswrapper[4823]: E0121 17:41:07.215047 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6\": container with ID starting with 37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6 not found: ID does not exist" containerID="37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.215092 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6"} err="failed to get container status \"37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6\": rpc error: code = NotFound desc = could not find container \"37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6\": container with ID starting with 37877ae5c4dadb68e9a1e3e74f5d516bc6ed5f32ebf8f8f52e3865cc2236a1a6 not found: ID does not exist" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.215118 4823 scope.go:117] "RemoveContainer" containerID="72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07" Jan 21 17:41:07 crc kubenswrapper[4823]: E0121 17:41:07.217480 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07\": container with ID starting with 72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07 not found: ID does not exist" containerID="72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.217563 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07"} err="failed to get container status \"72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07\": rpc error: code = NotFound desc = could not find container \"72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07\": container with ID starting with 72d5420b9266599ae1478c16debbb2e88299c6037de92ed86e0fd441463e9d07 not found: ID does not exist" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223418 4823 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223460 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223469 4823 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223478 4823 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223487 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223496 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223506 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95dln\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-kube-api-access-95dln\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223535 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223547 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.223556 4823 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.261407 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.305320 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4dd8ea30-a041-4ce6-8a36-b8a355b076dc" (UID: "4dd8ea30-a041-4ce6-8a36-b8a355b076dc"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.325708 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dd8ea30-a041-4ce6-8a36-b8a355b076dc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.325742 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.358412 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619d3aad-c1a1-4d30-ac6f-a0b9535371dc" path="/var/lib/kubelet/pods/619d3aad-c1a1-4d30-ac6f-a0b9535371dc/volumes" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.431100 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.443282 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.456459 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 17:41:07 crc kubenswrapper[4823]: E0121 17:41:07.457163 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" containerName="rabbitmq" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.457254 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" containerName="rabbitmq" Jan 21 17:41:07 crc kubenswrapper[4823]: E0121 17:41:07.457330 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" containerName="setup-container" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.457390 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" containerName="setup-container" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.457637 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" containerName="rabbitmq" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.459051 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.464052 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.464206 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bhgxm" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.464464 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.464559 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.464682 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.464844 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.465070 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.495108 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.511327 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.531787 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.531960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.532018 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ad50cd2-2f93-4a56-aa86-8b81e205531e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.532077 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.532364 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nggh\" (UniqueName: \"kubernetes.io/projected/8ad50cd2-2f93-4a56-aa86-8b81e205531e-kube-api-access-9nggh\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.532416 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ad50cd2-2f93-4a56-aa86-8b81e205531e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.532499 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ad50cd2-2f93-4a56-aa86-8b81e205531e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.532531 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.532555 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ad50cd2-2f93-4a56-aa86-8b81e205531e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.532618 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ad50cd2-2f93-4a56-aa86-8b81e205531e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.532651 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.634935 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635052 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635086 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ad50cd2-2f93-4a56-aa86-8b81e205531e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635133 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nggh\" (UniqueName: \"kubernetes.io/projected/8ad50cd2-2f93-4a56-aa86-8b81e205531e-kube-api-access-9nggh\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635214 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ad50cd2-2f93-4a56-aa86-8b81e205531e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635265 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ad50cd2-2f93-4a56-aa86-8b81e205531e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635289 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635310 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ad50cd2-2f93-4a56-aa86-8b81e205531e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635356 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ad50cd2-2f93-4a56-aa86-8b81e205531e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635379 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.635435 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.636348 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ad50cd2-2f93-4a56-aa86-8b81e205531e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.636644 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.637352 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ad50cd2-2f93-4a56-aa86-8b81e205531e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.637441 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.637814 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ad50cd2-2f93-4a56-aa86-8b81e205531e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.639494 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ad50cd2-2f93-4a56-aa86-8b81e205531e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.640170 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ad50cd2-2f93-4a56-aa86-8b81e205531e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.641988 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.643710 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ad50cd2-2f93-4a56-aa86-8b81e205531e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.663310 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nggh\" (UniqueName: \"kubernetes.io/projected/8ad50cd2-2f93-4a56-aa86-8b81e205531e-kube-api-access-9nggh\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.679643 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ad50cd2-2f93-4a56-aa86-8b81e205531e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:07 crc kubenswrapper[4823]: I0121 17:41:07.784809 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:08 crc kubenswrapper[4823]: I0121 17:41:08.117532 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b31dcb7b-15e2-4a14-bdab-d2887043e52a","Type":"ContainerStarted","Data":"eecbab79fad210747cadc7704e4e6000f3f2ac621dd4074559ae94691f0d4aca"} Jan 21 17:41:08 crc kubenswrapper[4823]: I0121 17:41:08.260634 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 17:41:08 crc kubenswrapper[4823]: W0121 17:41:08.275758 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ad50cd2_2f93_4a56_aa86_8b81e205531e.slice/crio-b1b72260683556ff0f5aaca7f3e52e9e0e8e2f40d0032c493757494202603f81 WatchSource:0}: Error finding container b1b72260683556ff0f5aaca7f3e52e9e0e8e2f40d0032c493757494202603f81: Status 404 returned error can't find the container with id b1b72260683556ff0f5aaca7f3e52e9e0e8e2f40d0032c493757494202603f81 Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.129736 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b31dcb7b-15e2-4a14-bdab-d2887043e52a","Type":"ContainerStarted","Data":"9544e2706027bb50717847a5171de9a67a6c50e908f5253e2d732f8f4e6b4486"} Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.131255 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ad50cd2-2f93-4a56-aa86-8b81e205531e","Type":"ContainerStarted","Data":"b1b72260683556ff0f5aaca7f3e52e9e0e8e2f40d0032c493757494202603f81"} Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.167595 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zvf4z"] Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.169423 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.172306 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.201767 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zvf4z"] Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.269946 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.270330 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.270419 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.270613 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-svc\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.270778 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-config\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.270901 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dnm2\" (UniqueName: \"kubernetes.io/projected/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-kube-api-access-6dnm2\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.271095 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.356057 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd8ea30-a041-4ce6-8a36-b8a355b076dc" path="/var/lib/kubelet/pods/4dd8ea30-a041-4ce6-8a36-b8a355b076dc/volumes" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 
17:41:09.373348 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-config\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.373425 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dnm2\" (UniqueName: \"kubernetes.io/projected/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-kube-api-access-6dnm2\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.373469 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.373547 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.373646 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.373685 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.373717 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-svc\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.374550 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.374562 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.374645 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-config\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.374667 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.374937 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-svc\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.377539 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.405739 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dnm2\" (UniqueName: \"kubernetes.io/projected/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-kube-api-access-6dnm2\") pod \"dnsmasq-dns-d558885bc-zvf4z\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:09 crc kubenswrapper[4823]: I0121 17:41:09.495103 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:10 crc kubenswrapper[4823]: I0121 17:41:10.089786 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zvf4z"] Jan 21 17:41:10 crc kubenswrapper[4823]: W0121 17:41:10.099233 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92777401_b4fc_4ac9_bd12_50c2be1b6fb9.slice/crio-c83b76bf6ad7f4af6d32f34aae60dd8b031ee6c63eb41ed8987552272be00a53 WatchSource:0}: Error finding container c83b76bf6ad7f4af6d32f34aae60dd8b031ee6c63eb41ed8987552272be00a53: Status 404 returned error can't find the container with id c83b76bf6ad7f4af6d32f34aae60dd8b031ee6c63eb41ed8987552272be00a53 Jan 21 17:41:10 crc kubenswrapper[4823]: I0121 17:41:10.176559 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ad50cd2-2f93-4a56-aa86-8b81e205531e","Type":"ContainerStarted","Data":"403812f0521785852c5de57d25e55008a39d42034ea06b4f39ab33491e8a9bac"} Jan 21 17:41:10 crc kubenswrapper[4823]: I0121 17:41:10.185357 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" event={"ID":"92777401-b4fc-4ac9-bd12-50c2be1b6fb9","Type":"ContainerStarted","Data":"c83b76bf6ad7f4af6d32f34aae60dd8b031ee6c63eb41ed8987552272be00a53"} Jan 21 17:41:11 crc kubenswrapper[4823]: I0121 17:41:11.194714 4823 generic.go:334] "Generic (PLEG): container finished" podID="92777401-b4fc-4ac9-bd12-50c2be1b6fb9" containerID="315ae69658cf6122905d05ad55778b20f0bb1d111c97bf07b6f9debce7143c52" exitCode=0 Jan 21 17:41:11 crc kubenswrapper[4823]: I0121 17:41:11.194781 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" event={"ID":"92777401-b4fc-4ac9-bd12-50c2be1b6fb9","Type":"ContainerDied","Data":"315ae69658cf6122905d05ad55778b20f0bb1d111c97bf07b6f9debce7143c52"} Jan 21 17:41:12 crc kubenswrapper[4823]: I0121 17:41:12.206134 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" event={"ID":"92777401-b4fc-4ac9-bd12-50c2be1b6fb9","Type":"ContainerStarted","Data":"84142bc4cf623c951f9ea2268ec94f02527199f77af156cbfdc571d6809c58da"} Jan 21 17:41:12 crc kubenswrapper[4823]: I0121 17:41:12.206403 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:12 crc kubenswrapper[4823]: I0121 17:41:12.236677 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" podStartSLOduration=3.236649226 podStartE2EDuration="3.236649226s" podCreationTimestamp="2026-01-21 17:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:41:12.226646228 +0000 UTC m=+1473.152777088" watchObservedRunningTime="2026-01-21 17:41:12.236649226 +0000 UTC m=+1473.162780126" Jan 21 17:41:15 crc kubenswrapper[4823]: I0121 17:41:15.070757 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:41:15 crc kubenswrapper[4823]: I0121 17:41:15.071256 4823 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.498098 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.568924 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kd7pm"] Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.569236 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" podUID="0b62a6b8-398b-429c-b074-9e3db44b8449" containerName="dnsmasq-dns" containerID="cri-o://0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006" gracePeriod=10 Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.728677 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-mt792"] Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.730758 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.749337 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-mt792"] Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.899264 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-config\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.899362 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.899496 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.899525 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.899640 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.899696 
4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:19 crc kubenswrapper[4823]: I0121 17:41:19.899753 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4tgq\" (UniqueName: \"kubernetes.io/projected/01324a9e-711d-4754-8e1c-4d2ce8ae749a-kube-api-access-n4tgq\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.001167 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.001494 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.001525 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.001544 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.001581 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4tgq\" (UniqueName: \"kubernetes.io/projected/01324a9e-711d-4754-8e1c-4d2ce8ae749a-kube-api-access-n4tgq\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.001627 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-config\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.001691 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.002873 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.003154 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.003445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.003689 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.003824 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-config\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.004292 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01324a9e-711d-4754-8e1c-4d2ce8ae749a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.023640 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4tgq\" (UniqueName: \"kubernetes.io/projected/01324a9e-711d-4754-8e1c-4d2ce8ae749a-kube-api-access-n4tgq\") pod \"dnsmasq-dns-6b6dc74c5-mt792\" (UID: \"01324a9e-711d-4754-8e1c-4d2ce8ae749a\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.072705 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.234407 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.284340 4823 generic.go:334] "Generic (PLEG): container finished" podID="0b62a6b8-398b-429c-b074-9e3db44b8449" containerID="0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006" exitCode=0 Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.284383 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" event={"ID":"0b62a6b8-398b-429c-b074-9e3db44b8449","Type":"ContainerDied","Data":"0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006"} Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.284408 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" event={"ID":"0b62a6b8-398b-429c-b074-9e3db44b8449","Type":"ContainerDied","Data":"43b23b08929b868699110fb395e3cb9abf8c0d917ef778939bd6d90a3e00bd73"} Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.284427 4823 scope.go:117] "RemoveContainer" containerID="0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.284605 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kd7pm" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.306227 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-svc\") pod \"0b62a6b8-398b-429c-b074-9e3db44b8449\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.306334 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-swift-storage-0\") pod \"0b62a6b8-398b-429c-b074-9e3db44b8449\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.306374 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-config\") pod \"0b62a6b8-398b-429c-b074-9e3db44b8449\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.306431 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-nb\") pod \"0b62a6b8-398b-429c-b074-9e3db44b8449\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.306466 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn4fp\" (UniqueName: \"kubernetes.io/projected/0b62a6b8-398b-429c-b074-9e3db44b8449-kube-api-access-kn4fp\") pod \"0b62a6b8-398b-429c-b074-9e3db44b8449\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.306488 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-sb\") pod \"0b62a6b8-398b-429c-b074-9e3db44b8449\" (UID: \"0b62a6b8-398b-429c-b074-9e3db44b8449\") " Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.319984 4823 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/0b62a6b8-398b-429c-b074-9e3db44b8449-kube-api-access-kn4fp" (OuterVolumeSpecName: "kube-api-access-kn4fp") pod "0b62a6b8-398b-429c-b074-9e3db44b8449" (UID: "0b62a6b8-398b-429c-b074-9e3db44b8449"). InnerVolumeSpecName "kube-api-access-kn4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.332083 4823 scope.go:117] "RemoveContainer" containerID="3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.371344 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0b62a6b8-398b-429c-b074-9e3db44b8449" (UID: "0b62a6b8-398b-429c-b074-9e3db44b8449"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.377904 4823 scope.go:117] "RemoveContainer" containerID="0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006" Jan 21 17:41:20 crc kubenswrapper[4823]: E0121 17:41:20.378452 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006\": container with ID starting with 0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006 not found: ID does not exist" containerID="0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.378488 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006"} err="failed to get container status \"0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006\": rpc error: code = NotFound desc = could not find container \"0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006\": container with ID starting with 0c05aca6950168af2ba07a7cdb109215f1f48c812f85ffd21992bf4834b11006 not found: ID does not exist" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.378508 4823 scope.go:117] "RemoveContainer" containerID="3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b" Jan 21 17:41:20 crc kubenswrapper[4823]: E0121 17:41:20.378821 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b\": container with ID starting with 3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b not found: ID does not exist" containerID="3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.378888 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b"} err="failed to get container status \"3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b\": rpc error: code = NotFound desc = could not find container \"3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b\": container with ID starting with 3116791a7b51c63f3192ccc62b5f3df7975524fceb6cc6c2dee301f75495fc1b not found: ID does not exist" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.379248 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0b62a6b8-398b-429c-b074-9e3db44b8449" (UID: "0b62a6b8-398b-429c-b074-9e3db44b8449"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.399415 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0b62a6b8-398b-429c-b074-9e3db44b8449" (UID: "0b62a6b8-398b-429c-b074-9e3db44b8449"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.410634 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.410690 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn4fp\" (UniqueName: \"kubernetes.io/projected/0b62a6b8-398b-429c-b074-9e3db44b8449-kube-api-access-kn4fp\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.410705 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.410715 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.424540 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0b62a6b8-398b-429c-b074-9e3db44b8449" (UID: "0b62a6b8-398b-429c-b074-9e3db44b8449"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.522212 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.524556 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-config" (OuterVolumeSpecName: "config") pod "0b62a6b8-398b-429c-b074-9e3db44b8449" (UID: "0b62a6b8-398b-429c-b074-9e3db44b8449"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.628304 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62a6b8-398b-429c-b074-9e3db44b8449-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:20 crc kubenswrapper[4823]: W0121 17:41:20.630941 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01324a9e_711d_4754_8e1c_4d2ce8ae749a.slice/crio-a74b7685461bb7bc62d4ff62ca2c61b97efe835ea830bee1d63254b853557df9 WatchSource:0}: Error finding container a74b7685461bb7bc62d4ff62ca2c61b97efe835ea830bee1d63254b853557df9: Status 404 returned error can't find the container with id a74b7685461bb7bc62d4ff62ca2c61b97efe835ea830bee1d63254b853557df9 Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.631917 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-mt792"] Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.661850 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kd7pm"] Jan 21 17:41:20 crc kubenswrapper[4823]: I0121 17:41:20.673575 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kd7pm"] Jan 21 17:41:21 crc kubenswrapper[4823]: I0121 17:41:21.301204 4823 generic.go:334] "Generic (PLEG): container finished" podID="01324a9e-711d-4754-8e1c-4d2ce8ae749a" containerID="71450bec366e2b224f25fba65c681f2bc7117f077710fa06bf5fa832a5621c79" exitCode=0 Jan 21 17:41:21 crc kubenswrapper[4823]: I0121 17:41:21.301348 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" event={"ID":"01324a9e-711d-4754-8e1c-4d2ce8ae749a","Type":"ContainerDied","Data":"71450bec366e2b224f25fba65c681f2bc7117f077710fa06bf5fa832a5621c79"} Jan 21 17:41:21 crc kubenswrapper[4823]: I0121 17:41:21.301711 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" event={"ID":"01324a9e-711d-4754-8e1c-4d2ce8ae749a","Type":"ContainerStarted","Data":"a74b7685461bb7bc62d4ff62ca2c61b97efe835ea830bee1d63254b853557df9"} Jan 21 17:41:21 crc kubenswrapper[4823]: I0121 17:41:21.376627 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b62a6b8-398b-429c-b074-9e3db44b8449" path="/var/lib/kubelet/pods/0b62a6b8-398b-429c-b074-9e3db44b8449/volumes" Jan 21 17:41:22 crc kubenswrapper[4823]: I0121 17:41:22.316309 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" event={"ID":"01324a9e-711d-4754-8e1c-4d2ce8ae749a","Type":"ContainerStarted","Data":"5a0ad6b59f9ed3813d531a80beec692b8d5c47bc48f9d96d8bf8f687b6b1dc4d"} Jan 21 17:41:22 crc kubenswrapper[4823]: I0121 17:41:22.318366 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:22 crc kubenswrapper[4823]: I0121 17:41:22.347913 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" podStartSLOduration=3.347895175 podStartE2EDuration="3.347895175s" podCreationTimestamp="2026-01-21 17:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:41:22.338495802 +0000 UTC m=+1483.264626672" watchObservedRunningTime="2026-01-21 17:41:22.347895175 +0000 UTC m=+1483.274026035" Jan 21 17:41:26 crc 
kubenswrapper[4823]: I0121 17:41:26.444623 4823 scope.go:117] "RemoveContainer" containerID="a8dde4e645aceeb5df0f82e153a8eb1cdc085fce6d6a09783197107af6a51f96" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.075729 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6dc74c5-mt792" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.167999 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zvf4z"] Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.169760 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" podUID="92777401-b4fc-4ac9-bd12-50c2be1b6fb9" containerName="dnsmasq-dns" containerID="cri-o://84142bc4cf623c951f9ea2268ec94f02527199f77af156cbfdc571d6809c58da" gracePeriod=10 Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.423401 4823 generic.go:334] "Generic (PLEG): container finished" podID="92777401-b4fc-4ac9-bd12-50c2be1b6fb9" containerID="84142bc4cf623c951f9ea2268ec94f02527199f77af156cbfdc571d6809c58da" exitCode=0 Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.423453 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" event={"ID":"92777401-b4fc-4ac9-bd12-50c2be1b6fb9","Type":"ContainerDied","Data":"84142bc4cf623c951f9ea2268ec94f02527199f77af156cbfdc571d6809c58da"} Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.714228 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.839402 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-config\") pod \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.839475 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-openstack-edpm-ipam\") pod \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.839634 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-swift-storage-0\") pod \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.839658 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dnm2\" (UniqueName: \"kubernetes.io/projected/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-kube-api-access-6dnm2\") pod \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.839722 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-svc\") pod \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.839769 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-nb\") pod \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.839828 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-sb\") pod \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\" (UID: \"92777401-b4fc-4ac9-bd12-50c2be1b6fb9\") " Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.859028 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-kube-api-access-6dnm2" (OuterVolumeSpecName: "kube-api-access-6dnm2") pod "92777401-b4fc-4ac9-bd12-50c2be1b6fb9" (UID: "92777401-b4fc-4ac9-bd12-50c2be1b6fb9"). InnerVolumeSpecName "kube-api-access-6dnm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.894528 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92777401-b4fc-4ac9-bd12-50c2be1b6fb9" (UID: "92777401-b4fc-4ac9-bd12-50c2be1b6fb9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.898348 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "92777401-b4fc-4ac9-bd12-50c2be1b6fb9" (UID: "92777401-b4fc-4ac9-bd12-50c2be1b6fb9"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.899304 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "92777401-b4fc-4ac9-bd12-50c2be1b6fb9" (UID: "92777401-b4fc-4ac9-bd12-50c2be1b6fb9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.904985 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92777401-b4fc-4ac9-bd12-50c2be1b6fb9" (UID: "92777401-b4fc-4ac9-bd12-50c2be1b6fb9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.907488 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-config" (OuterVolumeSpecName: "config") pod "92777401-b4fc-4ac9-bd12-50c2be1b6fb9" (UID: "92777401-b4fc-4ac9-bd12-50c2be1b6fb9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.908847 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92777401-b4fc-4ac9-bd12-50c2be1b6fb9" (UID: "92777401-b4fc-4ac9-bd12-50c2be1b6fb9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.942789 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.942894 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dnm2\" (UniqueName: \"kubernetes.io/projected/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-kube-api-access-6dnm2\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.942913 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.942937 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.942951 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.942965 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:30 crc kubenswrapper[4823]: I0121 17:41:30.942975 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/92777401-b4fc-4ac9-bd12-50c2be1b6fb9-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:41:31 crc kubenswrapper[4823]: I0121 17:41:31.434983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" event={"ID":"92777401-b4fc-4ac9-bd12-50c2be1b6fb9","Type":"ContainerDied","Data":"c83b76bf6ad7f4af6d32f34aae60dd8b031ee6c63eb41ed8987552272be00a53"} Jan 21 17:41:31 crc kubenswrapper[4823]: I0121 17:41:31.435046 4823 scope.go:117] "RemoveContainer" containerID="84142bc4cf623c951f9ea2268ec94f02527199f77af156cbfdc571d6809c58da" Jan 21 17:41:31 crc kubenswrapper[4823]: I0121 17:41:31.435058 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-zvf4z" Jan 21 17:41:31 crc kubenswrapper[4823]: I0121 17:41:31.476095 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zvf4z"] Jan 21 17:41:31 crc kubenswrapper[4823]: I0121 17:41:31.476795 4823 scope.go:117] "RemoveContainer" containerID="315ae69658cf6122905d05ad55778b20f0bb1d111c97bf07b6f9debce7143c52" Jan 21 17:41:31 crc kubenswrapper[4823]: I0121 17:41:31.493977 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zvf4z"] Jan 21 17:41:33 crc kubenswrapper[4823]: I0121 17:41:33.356901 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92777401-b4fc-4ac9-bd12-50c2be1b6fb9" path="/var/lib/kubelet/pods/92777401-b4fc-4ac9-bd12-50c2be1b6fb9/volumes" Jan 21 17:41:41 crc kubenswrapper[4823]: I0121 17:41:41.538008 4823 generic.go:334] "Generic (PLEG): container finished" podID="b31dcb7b-15e2-4a14-bdab-d2887043e52a" containerID="9544e2706027bb50717847a5171de9a67a6c50e908f5253e2d732f8f4e6b4486" exitCode=0 Jan 21 17:41:41 crc kubenswrapper[4823]: I0121 17:41:41.538511 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b31dcb7b-15e2-4a14-bdab-d2887043e52a","Type":"ContainerDied","Data":"9544e2706027bb50717847a5171de9a67a6c50e908f5253e2d732f8f4e6b4486"} Jan 21 17:41:42 crc kubenswrapper[4823]: I0121 17:41:42.561780 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b31dcb7b-15e2-4a14-bdab-d2887043e52a","Type":"ContainerStarted","Data":"2f82c7996f6b4d05816edd42ee9f9afa53cb5733909511e0dc32f0875eef5ecc"} Jan 21 17:41:42 crc kubenswrapper[4823]: I0121 17:41:42.562305 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 17:41:42 crc kubenswrapper[4823]: I0121 17:41:42.564304 4823 generic.go:334] "Generic (PLEG): container finished" podID="8ad50cd2-2f93-4a56-aa86-8b81e205531e" containerID="403812f0521785852c5de57d25e55008a39d42034ea06b4f39ab33491e8a9bac" exitCode=0 Jan 21 17:41:42 crc kubenswrapper[4823]: I0121 17:41:42.564342 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ad50cd2-2f93-4a56-aa86-8b81e205531e","Type":"ContainerDied","Data":"403812f0521785852c5de57d25e55008a39d42034ea06b4f39ab33491e8a9bac"} Jan 21 17:41:42 crc kubenswrapper[4823]: I0121 17:41:42.616820 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.616803477 podStartE2EDuration="36.616803477s" podCreationTimestamp="2026-01-21 17:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:41:42.59346355 +0000 UTC m=+1503.519594420" watchObservedRunningTime="2026-01-21 17:41:42.616803477 +0000 UTC m=+1503.542934337" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.576170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ad50cd2-2f93-4a56-aa86-8b81e205531e","Type":"ContainerStarted","Data":"878f45273e0fd4d50f8285deb524e5e30f82ede68053b9b6e83fd2f9fc9fb4eb"} Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.576745 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.624418 4823 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.624395556 podStartE2EDuration="36.624395556s" podCreationTimestamp="2026-01-21 17:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:41:43.600796432 +0000 UTC m=+1504.526927302" watchObservedRunningTime="2026-01-21 17:41:43.624395556 +0000 UTC m=+1504.550526416" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.649642 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5"] Jan 21 17:41:43 crc kubenswrapper[4823]: E0121 17:41:43.650098 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92777401-b4fc-4ac9-bd12-50c2be1b6fb9" containerName="dnsmasq-dns" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.650114 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="92777401-b4fc-4ac9-bd12-50c2be1b6fb9" containerName="dnsmasq-dns" Jan 21 17:41:43 crc kubenswrapper[4823]: E0121 17:41:43.650146 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92777401-b4fc-4ac9-bd12-50c2be1b6fb9" containerName="init" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.650153 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="92777401-b4fc-4ac9-bd12-50c2be1b6fb9" containerName="init" Jan 21 17:41:43 crc kubenswrapper[4823]: E0121 17:41:43.650167 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b62a6b8-398b-429c-b074-9e3db44b8449" containerName="init" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.650173 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b62a6b8-398b-429c-b074-9e3db44b8449" containerName="init" Jan 21 17:41:43 crc kubenswrapper[4823]: E0121 17:41:43.650183 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b62a6b8-398b-429c-b074-9e3db44b8449" containerName="dnsmasq-dns" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.650188 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b62a6b8-398b-429c-b074-9e3db44b8449" containerName="dnsmasq-dns" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.650368 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="92777401-b4fc-4ac9-bd12-50c2be1b6fb9" containerName="dnsmasq-dns" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.650384 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b62a6b8-398b-429c-b074-9e3db44b8449" containerName="dnsmasq-dns" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.651046 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.653434 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.653693 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.654619 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.654758 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.665104 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5"] Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.708009 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.708094 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.708136 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76w9\" (UniqueName: \"kubernetes.io/projected/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-kube-api-access-t76w9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.708271 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.810564 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.810681 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.810726 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.810758 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76w9\" (UniqueName: \"kubernetes.io/projected/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-kube-api-access-t76w9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.816011 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.828915 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.829001 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.831657 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t76w9\" (UniqueName: \"kubernetes.io/projected/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-kube-api-access-t76w9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:43 crc kubenswrapper[4823]: I0121 17:41:43.968723 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:41:44 crc kubenswrapper[4823]: I0121 17:41:44.565448 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5"] Jan 21 17:41:44 crc kubenswrapper[4823]: I0121 17:41:44.593504 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" event={"ID":"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b","Type":"ContainerStarted","Data":"60932fff39557ebf2376337acc9ad11a89b41f809bb77dd1156af87eadc590fd"} Jan 21 17:41:45 crc kubenswrapper[4823]: I0121 17:41:45.070733 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:41:45 crc kubenswrapper[4823]: I0121 17:41:45.070844 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:41:56 crc kubenswrapper[4823]: I0121 17:41:56.751056 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" event={"ID":"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b","Type":"ContainerStarted","Data":"682e20c7ceafb4b543234a67f6fb971fc07c18223c38f92e1e1b86e08642606a"} Jan 21 17:41:56 crc kubenswrapper[4823]: I0121 17:41:56.771286 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" podStartSLOduration=2.809970727 podStartE2EDuration="13.771258327s" podCreationTimestamp="2026-01-21 17:41:43 +0000 UTC" firstStartedPulling="2026-01-21 17:41:44.580930631 +0000 UTC m=+1505.507061491" lastFinishedPulling="2026-01-21 17:41:55.542218231 +0000 UTC m=+1516.468349091" observedRunningTime="2026-01-21 17:41:56.766721164 +0000 UTC m=+1517.692852024" watchObservedRunningTime="2026-01-21 17:41:56.771258327 +0000 UTC m=+1517.697389187" Jan 21 17:41:56 crc kubenswrapper[4823]: I0121 17:41:56.806814 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 17:41:57 crc kubenswrapper[4823]: I0121 17:41:57.787091 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 17:42:10 crc kubenswrapper[4823]: I0121 17:42:10.912458 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b" containerID="682e20c7ceafb4b543234a67f6fb971fc07c18223c38f92e1e1b86e08642606a" exitCode=0 Jan 21 17:42:10 crc kubenswrapper[4823]: I0121 17:42:10.912568 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" event={"ID":"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b","Type":"ContainerDied","Data":"682e20c7ceafb4b543234a67f6fb971fc07c18223c38f92e1e1b86e08642606a"} Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.435886 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.576463 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t76w9\" (UniqueName: \"kubernetes.io/projected/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-kube-api-access-t76w9\") pod \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.576646 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-ssh-key-openstack-edpm-ipam\") pod \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.576688 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-inventory\") pod \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.576757 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-repo-setup-combined-ca-bundle\") pod \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\" (UID: \"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b\") " Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.588217 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b" (UID: "d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.588259 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-kube-api-access-t76w9" (OuterVolumeSpecName: "kube-api-access-t76w9") pod "d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b" (UID: "d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b"). InnerVolumeSpecName "kube-api-access-t76w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.608308 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b" (UID: "d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.609243 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-inventory" (OuterVolumeSpecName: "inventory") pod "d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b" (UID: "d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.679079 4823 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.679113 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t76w9\" (UniqueName: \"kubernetes.io/projected/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-kube-api-access-t76w9\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.679123 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.679132 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.938680 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" event={"ID":"d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b","Type":"ContainerDied","Data":"60932fff39557ebf2376337acc9ad11a89b41f809bb77dd1156af87eadc590fd"} Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.938978 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60932fff39557ebf2376337acc9ad11a89b41f809bb77dd1156af87eadc590fd" Jan 21 17:42:12 crc kubenswrapper[4823]: I0121 17:42:12.938771 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.050902 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl"] Jan 21 17:42:13 crc kubenswrapper[4823]: E0121 17:42:13.051300 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.051318 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.051527 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.052180 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.055516 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.055817 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.056131 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.057042 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.068423 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl"] Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.105513 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nm2tl\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.105654 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk4sm\" (UniqueName: \"kubernetes.io/projected/e40a0e6f-16f4-4050-ad53-1c5678c23a87-kube-api-access-gk4sm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nm2tl\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.105731 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nm2tl\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.206826 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nm2tl\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.206904 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk4sm\" (UniqueName: \"kubernetes.io/projected/e40a0e6f-16f4-4050-ad53-1c5678c23a87-kube-api-access-gk4sm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nm2tl\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.206938 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-nm2tl\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.217481 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nm2tl\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.217624 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nm2tl\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.223621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk4sm\" (UniqueName: \"kubernetes.io/projected/e40a0e6f-16f4-4050-ad53-1c5678c23a87-kube-api-access-gk4sm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nm2tl\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.376401 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.934671 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl"] Jan 21 17:42:13 crc kubenswrapper[4823]: I0121 17:42:13.949921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" event={"ID":"e40a0e6f-16f4-4050-ad53-1c5678c23a87","Type":"ContainerStarted","Data":"02df9296613ed02eb055a22f7dd753769b6fd93a743c9fcb2626a63201dc1554"} Jan 21 17:42:14 crc kubenswrapper[4823]: I0121 17:42:14.966683 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" event={"ID":"e40a0e6f-16f4-4050-ad53-1c5678c23a87","Type":"ContainerStarted","Data":"4adc2bde20844fa662a67a8c8b1d12ba577b96e59f387472d9c10f158f3119dc"} Jan 21 17:42:14 crc kubenswrapper[4823]: I0121 17:42:14.997225 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" podStartSLOduration=1.596222571 podStartE2EDuration="1.997193288s" podCreationTimestamp="2026-01-21 17:42:13 +0000 UTC" firstStartedPulling="2026-01-21 17:42:13.94155767 +0000 UTC m=+1534.867688550" lastFinishedPulling="2026-01-21 17:42:14.342528397 +0000 UTC m=+1535.268659267" observedRunningTime="2026-01-21 17:42:14.98393808 +0000 UTC m=+1535.910068950" watchObservedRunningTime="2026-01-21 17:42:14.997193288 +0000 UTC m=+1535.923324188" Jan 21 17:42:15 crc kubenswrapper[4823]: I0121 17:42:15.071259 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:42:15 crc kubenswrapper[4823]: 
I0121 17:42:15.071323 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:42:15 crc kubenswrapper[4823]: I0121 17:42:15.071371 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:42:15 crc kubenswrapper[4823]: I0121 17:42:15.073129 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:42:15 crc kubenswrapper[4823]: I0121 17:42:15.073198 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" gracePeriod=600 Jan 21 17:42:15 crc kubenswrapper[4823]: E0121 17:42:15.201001 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:42:15 crc kubenswrapper[4823]: I0121 17:42:15.980357 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" exitCode=0 Jan 21 17:42:15 crc kubenswrapper[4823]: I0121 17:42:15.980443 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb"} Jan 21 17:42:15 crc kubenswrapper[4823]: I0121 17:42:15.980536 4823 scope.go:117] "RemoveContainer" containerID="e5cf874111542cda3e34240991e3ef5c73b1f1132ce5389832d5612a1548617a" Jan 21 17:42:15 crc kubenswrapper[4823]: I0121 17:42:15.981562 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:42:15 crc kubenswrapper[4823]: E0121 17:42:15.981943 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:42:18 crc kubenswrapper[4823]: I0121 17:42:18.005297 4823 generic.go:334] "Generic (PLEG): container finished" podID="e40a0e6f-16f4-4050-ad53-1c5678c23a87" 
containerID="4adc2bde20844fa662a67a8c8b1d12ba577b96e59f387472d9c10f158f3119dc" exitCode=0 Jan 21 17:42:18 crc kubenswrapper[4823]: I0121 17:42:18.005434 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" event={"ID":"e40a0e6f-16f4-4050-ad53-1c5678c23a87","Type":"ContainerDied","Data":"4adc2bde20844fa662a67a8c8b1d12ba577b96e59f387472d9c10f158f3119dc"} Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.585530 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.654461 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-ssh-key-openstack-edpm-ipam\") pod \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.654547 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk4sm\" (UniqueName: \"kubernetes.io/projected/e40a0e6f-16f4-4050-ad53-1c5678c23a87-kube-api-access-gk4sm\") pod \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.654646 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-inventory\") pod \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\" (UID: \"e40a0e6f-16f4-4050-ad53-1c5678c23a87\") " Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.669552 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40a0e6f-16f4-4050-ad53-1c5678c23a87-kube-api-access-gk4sm" (OuterVolumeSpecName: "kube-api-access-gk4sm") pod "e40a0e6f-16f4-4050-ad53-1c5678c23a87" (UID: "e40a0e6f-16f4-4050-ad53-1c5678c23a87"). InnerVolumeSpecName "kube-api-access-gk4sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.687526 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e40a0e6f-16f4-4050-ad53-1c5678c23a87" (UID: "e40a0e6f-16f4-4050-ad53-1c5678c23a87"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.714155 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-inventory" (OuterVolumeSpecName: "inventory") pod "e40a0e6f-16f4-4050-ad53-1c5678c23a87" (UID: "e40a0e6f-16f4-4050-ad53-1c5678c23a87"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.756782 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.756820 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk4sm\" (UniqueName: \"kubernetes.io/projected/e40a0e6f-16f4-4050-ad53-1c5678c23a87-kube-api-access-gk4sm\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:19 crc kubenswrapper[4823]: I0121 17:42:19.756830 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40a0e6f-16f4-4050-ad53-1c5678c23a87-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.033921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" event={"ID":"e40a0e6f-16f4-4050-ad53-1c5678c23a87","Type":"ContainerDied","Data":"02df9296613ed02eb055a22f7dd753769b6fd93a743c9fcb2626a63201dc1554"} Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.033977 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02df9296613ed02eb055a22f7dd753769b6fd93a743c9fcb2626a63201dc1554" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.034046 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nm2tl" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.129328 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89"] Jan 21 17:42:20 crc kubenswrapper[4823]: E0121 17:42:20.129931 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40a0e6f-16f4-4050-ad53-1c5678c23a87" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.129953 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40a0e6f-16f4-4050-ad53-1c5678c23a87" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.132400 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40a0e6f-16f4-4050-ad53-1c5678c23a87" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.133674 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.136152 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.136769 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.137602 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.141849 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.143815 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89"] Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.166075 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.166131 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.166207 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.166342 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwc8\" (UniqueName: \"kubernetes.io/projected/6acc67e0-e641-4a80-a9e0-e1373d9de675-kube-api-access-mgwc8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.268014 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.268060 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.268109 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.268134 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwc8\" (UniqueName: \"kubernetes.io/projected/6acc67e0-e641-4a80-a9e0-e1373d9de675-kube-api-access-mgwc8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.275120 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.278098 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.281026 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.286454 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgwc8\" (UniqueName: \"kubernetes.io/projected/6acc67e0-e641-4a80-a9e0-e1373d9de675-kube-api-access-mgwc8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-48h89\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:20 crc kubenswrapper[4823]: I0121 17:42:20.461797 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:42:21 crc kubenswrapper[4823]: I0121 17:42:21.114064 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89"] Jan 21 17:42:22 crc kubenswrapper[4823]: I0121 17:42:22.062123 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" event={"ID":"6acc67e0-e641-4a80-a9e0-e1373d9de675","Type":"ContainerStarted","Data":"6bd9d48893eb245b4b218fffe214590e6f5515f6bf930870b031610f9b0f8a4e"} Jan 21 17:42:22 crc kubenswrapper[4823]: I0121 17:42:22.062685 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" event={"ID":"6acc67e0-e641-4a80-a9e0-e1373d9de675","Type":"ContainerStarted","Data":"eb6618fb751d6ca0c1043cc7b811806f017f1c503b10a4f4b2fc451312a7f214"} Jan 21 17:42:22 crc kubenswrapper[4823]: I0121 17:42:22.092160 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" podStartSLOduration=1.52033614 podStartE2EDuration="2.092135091s" podCreationTimestamp="2026-01-21 17:42:20 +0000 UTC" firstStartedPulling="2026-01-21 17:42:21.110125415 +0000 UTC m=+1542.036256275" lastFinishedPulling="2026-01-21 17:42:21.681924366 +0000 UTC m=+1542.608055226" observedRunningTime="2026-01-21 17:42:22.078051543 +0000 UTC m=+1543.004182403" watchObservedRunningTime="2026-01-21 17:42:22.092135091 +0000 UTC m=+1543.018265971" Jan 21 17:42:26 crc kubenswrapper[4823]: I0121 17:42:26.574622 4823 scope.go:117] "RemoveContainer" containerID="de9f78c788c886fabd6aca7b3e715188fa10ba0ba90c90f3b22dd37bfc6050e7" Jan 21 17:42:26 crc kubenswrapper[4823]: I0121 17:42:26.625531 4823 scope.go:117] "RemoveContainer" containerID="faf3fdf785e5f9cf0f48eafcebd0b57c71ee2de9c24b29983eca1c223cd0f097" Jan 21 17:42:26 crc kubenswrapper[4823]: I0121 17:42:26.657965 4823 scope.go:117] "RemoveContainer" containerID="830e567e88aec7e8ac41bd87adef1f7be90d6a4918f6692c669789a28770e331" Jan 21 17:42:29 crc kubenswrapper[4823]: I0121 17:42:29.356594 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:42:29 crc kubenswrapper[4823]: E0121 17:42:29.357286 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:42:29 crc kubenswrapper[4823]: I0121 17:42:29.899280 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7sxll"] Jan 21 17:42:29 crc kubenswrapper[4823]: I0121 17:42:29.901248 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:29 crc kubenswrapper[4823]: I0121 17:42:29.918701 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sxll"] Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.080576 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsd5m\" (UniqueName: \"kubernetes.io/projected/814aebfd-116a-4e0a-9def-1c41fabbf7df-kube-api-access-lsd5m\") pod \"certified-operators-7sxll\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.080743 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-utilities\") pod \"certified-operators-7sxll\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.080803 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-catalog-content\") pod \"certified-operators-7sxll\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.183817 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsd5m\" (UniqueName: \"kubernetes.io/projected/814aebfd-116a-4e0a-9def-1c41fabbf7df-kube-api-access-lsd5m\") pod \"certified-operators-7sxll\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.183981 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-utilities\") pod \"certified-operators-7sxll\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.184033 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-catalog-content\") pod \"certified-operators-7sxll\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.184635 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-catalog-content\") pod \"certified-operators-7sxll\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.198905 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-utilities\") pod \"certified-operators-7sxll\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.247294 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lsd5m\" (UniqueName: \"kubernetes.io/projected/814aebfd-116a-4e0a-9def-1c41fabbf7df-kube-api-access-lsd5m\") pod \"certified-operators-7sxll\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:30 crc kubenswrapper[4823]: I0121 17:42:30.530472 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:31 crc kubenswrapper[4823]: I0121 17:42:31.107253 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sxll"] Jan 21 17:42:31 crc kubenswrapper[4823]: I0121 17:42:31.178010 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxll" event={"ID":"814aebfd-116a-4e0a-9def-1c41fabbf7df","Type":"ContainerStarted","Data":"45b83835f1722183b7e004e4ec8abb68bfc78cf938812a757dc0142d1a8145eb"} Jan 21 17:42:32 crc kubenswrapper[4823]: I0121 17:42:32.193361 4823 generic.go:334] "Generic (PLEG): container finished" podID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerID="e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697" exitCode=0 Jan 21 17:42:32 crc kubenswrapper[4823]: I0121 17:42:32.193433 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxll" event={"ID":"814aebfd-116a-4e0a-9def-1c41fabbf7df","Type":"ContainerDied","Data":"e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697"} Jan 21 17:42:34 crc kubenswrapper[4823]: I0121 17:42:34.220722 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxll" event={"ID":"814aebfd-116a-4e0a-9def-1c41fabbf7df","Type":"ContainerStarted","Data":"1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12"} Jan 21 17:42:35 crc kubenswrapper[4823]: I0121 17:42:35.235728 4823 generic.go:334] "Generic (PLEG): container finished" podID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerID="1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12" exitCode=0 Jan 21 17:42:35 crc kubenswrapper[4823]: I0121 17:42:35.235864 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxll" event={"ID":"814aebfd-116a-4e0a-9def-1c41fabbf7df","Type":"ContainerDied","Data":"1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12"} Jan 21 17:42:37 crc kubenswrapper[4823]: I0121 17:42:37.264249 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxll" event={"ID":"814aebfd-116a-4e0a-9def-1c41fabbf7df","Type":"ContainerStarted","Data":"297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1"} Jan 21 17:42:37 crc kubenswrapper[4823]: I0121 17:42:37.302067 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7sxll" podStartSLOduration=4.012172101 podStartE2EDuration="8.302033203s" podCreationTimestamp="2026-01-21 17:42:29 +0000 UTC" firstStartedPulling="2026-01-21 17:42:32.196675423 +0000 UTC m=+1553.122806303" lastFinishedPulling="2026-01-21 17:42:36.486536545 +0000 UTC m=+1557.412667405" observedRunningTime="2026-01-21 17:42:37.285308129 +0000 UTC m=+1558.211439029" watchObservedRunningTime="2026-01-21 17:42:37.302033203 +0000 UTC m=+1558.228164093" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.169084 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-77z8x"] Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.172341 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.203720 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-catalog-content\") pod \"community-operators-77z8x\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.203817 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-utilities\") pod \"community-operators-77z8x\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.203837 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zj9n\" (UniqueName: \"kubernetes.io/projected/4fac0162-dd33-429a-bb49-6119202c0b25-kube-api-access-9zj9n\") pod \"community-operators-77z8x\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.207058 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77z8x"] Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.305543 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zj9n\" (UniqueName: \"kubernetes.io/projected/4fac0162-dd33-429a-bb49-6119202c0b25-kube-api-access-9zj9n\") pod \"community-operators-77z8x\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.305608 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-utilities\") pod \"community-operators-77z8x\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.305772 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-catalog-content\") pod \"community-operators-77z8x\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.306364 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-catalog-content\") pod \"community-operators-77z8x\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.306429 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-utilities\") pod \"community-operators-77z8x\" (UID: 
\"4fac0162-dd33-429a-bb49-6119202c0b25\") " pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.331060 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zj9n\" (UniqueName: \"kubernetes.io/projected/4fac0162-dd33-429a-bb49-6119202c0b25-kube-api-access-9zj9n\") pod \"community-operators-77z8x\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.497934 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.531560 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.533488 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.617120 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:40 crc kubenswrapper[4823]: I0121 17:42:40.985871 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77z8x"] Jan 21 17:42:40 crc kubenswrapper[4823]: W0121 17:42:40.987403 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac0162_dd33_429a_bb49_6119202c0b25.slice/crio-b08eb6352f7f1134425bfaa70dec0092ce963f9fb8bce0b5e469f2422e30e37d WatchSource:0}: Error finding container b08eb6352f7f1134425bfaa70dec0092ce963f9fb8bce0b5e469f2422e30e37d: Status 404 returned error can't find the container with id b08eb6352f7f1134425bfaa70dec0092ce963f9fb8bce0b5e469f2422e30e37d Jan 21 17:42:41 crc kubenswrapper[4823]: I0121 17:42:41.305058 4823 generic.go:334] "Generic (PLEG): container finished" podID="4fac0162-dd33-429a-bb49-6119202c0b25" containerID="5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7" exitCode=0 Jan 21 17:42:41 crc kubenswrapper[4823]: I0121 17:42:41.305116 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77z8x" event={"ID":"4fac0162-dd33-429a-bb49-6119202c0b25","Type":"ContainerDied","Data":"5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7"} Jan 21 17:42:41 crc kubenswrapper[4823]: I0121 17:42:41.305979 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77z8x" event={"ID":"4fac0162-dd33-429a-bb49-6119202c0b25","Type":"ContainerStarted","Data":"b08eb6352f7f1134425bfaa70dec0092ce963f9fb8bce0b5e469f2422e30e37d"} Jan 21 17:42:41 crc kubenswrapper[4823]: I0121 17:42:41.307166 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:42:41 crc kubenswrapper[4823]: I0121 17:42:41.360045 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:42 crc kubenswrapper[4823]: I0121 17:42:42.330677 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77z8x" event={"ID":"4fac0162-dd33-429a-bb49-6119202c0b25","Type":"ContainerStarted","Data":"ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf"} 
Jan 21 17:42:42 crc kubenswrapper[4823]: I0121 17:42:42.347450 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:42:42 crc kubenswrapper[4823]: E0121 17:42:42.347777 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:42:42 crc kubenswrapper[4823]: I0121 17:42:42.951178 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sxll"] Jan 21 17:42:43 crc kubenswrapper[4823]: I0121 17:42:43.340420 4823 generic.go:334] "Generic (PLEG): container finished" podID="4fac0162-dd33-429a-bb49-6119202c0b25" containerID="ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf" exitCode=0 Jan 21 17:42:43 crc kubenswrapper[4823]: I0121 17:42:43.341663 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77z8x" event={"ID":"4fac0162-dd33-429a-bb49-6119202c0b25","Type":"ContainerDied","Data":"ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf"} Jan 21 17:42:44 crc kubenswrapper[4823]: I0121 17:42:44.351919 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7sxll" podUID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerName="registry-server" containerID="cri-o://297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1" gracePeriod=2 Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.364008 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.371444 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77z8x" event={"ID":"4fac0162-dd33-429a-bb49-6119202c0b25","Type":"ContainerStarted","Data":"ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767"} Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.374612 4823 generic.go:334] "Generic (PLEG): container finished" podID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerID="297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1" exitCode=0 Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.374651 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxll" event={"ID":"814aebfd-116a-4e0a-9def-1c41fabbf7df","Type":"ContainerDied","Data":"297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1"} Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.374674 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxll" event={"ID":"814aebfd-116a-4e0a-9def-1c41fabbf7df","Type":"ContainerDied","Data":"45b83835f1722183b7e004e4ec8abb68bfc78cf938812a757dc0142d1a8145eb"} Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.374691 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7sxll" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.374697 4823 scope.go:117] "RemoveContainer" containerID="297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.409263 4823 scope.go:117] "RemoveContainer" containerID="1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.422252 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsd5m\" (UniqueName: \"kubernetes.io/projected/814aebfd-116a-4e0a-9def-1c41fabbf7df-kube-api-access-lsd5m\") pod \"814aebfd-116a-4e0a-9def-1c41fabbf7df\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.422410 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-utilities\") pod \"814aebfd-116a-4e0a-9def-1c41fabbf7df\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.422575 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-catalog-content\") pod \"814aebfd-116a-4e0a-9def-1c41fabbf7df\" (UID: \"814aebfd-116a-4e0a-9def-1c41fabbf7df\") " Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.424575 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-utilities" (OuterVolumeSpecName: "utilities") pod "814aebfd-116a-4e0a-9def-1c41fabbf7df" (UID: "814aebfd-116a-4e0a-9def-1c41fabbf7df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.432072 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814aebfd-116a-4e0a-9def-1c41fabbf7df-kube-api-access-lsd5m" (OuterVolumeSpecName: "kube-api-access-lsd5m") pod "814aebfd-116a-4e0a-9def-1c41fabbf7df" (UID: "814aebfd-116a-4e0a-9def-1c41fabbf7df"). InnerVolumeSpecName "kube-api-access-lsd5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.435362 4823 scope.go:117] "RemoveContainer" containerID="e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.437206 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-77z8x" podStartSLOduration=2.107648398 podStartE2EDuration="5.437181s" podCreationTimestamp="2026-01-21 17:42:40 +0000 UTC" firstStartedPulling="2026-01-21 17:42:41.306978478 +0000 UTC m=+1562.233109338" lastFinishedPulling="2026-01-21 17:42:44.63651108 +0000 UTC m=+1565.562641940" observedRunningTime="2026-01-21 17:42:45.40683773 +0000 UTC m=+1566.332968610" watchObservedRunningTime="2026-01-21 17:42:45.437181 +0000 UTC m=+1566.363311880" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.485787 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "814aebfd-116a-4e0a-9def-1c41fabbf7df" (UID: "814aebfd-116a-4e0a-9def-1c41fabbf7df"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.524770 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.524831 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814aebfd-116a-4e0a-9def-1c41fabbf7df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.524874 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsd5m\" (UniqueName: \"kubernetes.io/projected/814aebfd-116a-4e0a-9def-1c41fabbf7df-kube-api-access-lsd5m\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.546936 4823 scope.go:117] "RemoveContainer" containerID="297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1" Jan 21 17:42:45 crc kubenswrapper[4823]: E0121 17:42:45.548435 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1\": container with ID starting with 297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1 not found: ID does not exist" containerID="297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.548500 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1"} err="failed to get container status \"297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1\": rpc error: code = NotFound desc = could not find container \"297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1\": container with ID starting with 297ac6343ee80631757a3d7b7fb65227ed0ef73d48fbacbda056bc4d17a8adb1 not found: ID does not exist" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.548532 4823 scope.go:117] "RemoveContainer" containerID="1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12" Jan 21 17:42:45 crc kubenswrapper[4823]: E0121 17:42:45.549577 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12\": container with ID starting with 1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12 not found: ID does not exist" containerID="1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.549634 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12"} err="failed to get container status \"1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12\": rpc error: code = NotFound desc = could not find container \"1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12\": container with ID starting with 1ff64aaea618ccce739e4cf5e35169cef39b28b0371069575ee50c2b4b7c2a12 not found: ID does not exist" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.549666 4823 scope.go:117] "RemoveContainer" 
containerID="e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697" Jan 21 17:42:45 crc kubenswrapper[4823]: E0121 17:42:45.550122 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697\": container with ID starting with e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697 not found: ID does not exist" containerID="e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.550161 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697"} err="failed to get container status \"e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697\": rpc error: code = NotFound desc = could not find container \"e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697\": container with ID starting with e68300187068cbc7240261d5cbd82667d50eb1405a9533c24859700452c66697 not found: ID does not exist" Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.732397 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sxll"] Jan 21 17:42:45 crc kubenswrapper[4823]: I0121 17:42:45.744548 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7sxll"] Jan 21 17:42:47 crc kubenswrapper[4823]: I0121 17:42:47.357234 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="814aebfd-116a-4e0a-9def-1c41fabbf7df" path="/var/lib/kubelet/pods/814aebfd-116a-4e0a-9def-1c41fabbf7df/volumes" Jan 21 17:42:50 crc kubenswrapper[4823]: I0121 17:42:50.499208 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:50 crc kubenswrapper[4823]: I0121 17:42:50.499682 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:50 crc kubenswrapper[4823]: I0121 17:42:50.554830 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:51 crc kubenswrapper[4823]: I0121 17:42:51.489995 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:51 crc kubenswrapper[4823]: I0121 17:42:51.549986 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77z8x"] Jan 21 17:42:53 crc kubenswrapper[4823]: I0121 17:42:53.454909 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-77z8x" podUID="4fac0162-dd33-429a-bb49-6119202c0b25" containerName="registry-server" containerID="cri-o://ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767" gracePeriod=2 Jan 21 17:42:53 crc kubenswrapper[4823]: I0121 17:42:53.967691 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.116170 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-utilities\") pod \"4fac0162-dd33-429a-bb49-6119202c0b25\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.116371 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-catalog-content\") pod \"4fac0162-dd33-429a-bb49-6119202c0b25\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.116479 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zj9n\" (UniqueName: \"kubernetes.io/projected/4fac0162-dd33-429a-bb49-6119202c0b25-kube-api-access-9zj9n\") pod \"4fac0162-dd33-429a-bb49-6119202c0b25\" (UID: \"4fac0162-dd33-429a-bb49-6119202c0b25\") " Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.117059 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-utilities" (OuterVolumeSpecName: "utilities") pod "4fac0162-dd33-429a-bb49-6119202c0b25" (UID: "4fac0162-dd33-429a-bb49-6119202c0b25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.122110 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fac0162-dd33-429a-bb49-6119202c0b25-kube-api-access-9zj9n" (OuterVolumeSpecName: "kube-api-access-9zj9n") pod "4fac0162-dd33-429a-bb49-6119202c0b25" (UID: "4fac0162-dd33-429a-bb49-6119202c0b25"). InnerVolumeSpecName "kube-api-access-9zj9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.169316 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fac0162-dd33-429a-bb49-6119202c0b25" (UID: "4fac0162-dd33-429a-bb49-6119202c0b25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.220511 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zj9n\" (UniqueName: \"kubernetes.io/projected/4fac0162-dd33-429a-bb49-6119202c0b25-kube-api-access-9zj9n\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.220570 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.220586 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fac0162-dd33-429a-bb49-6119202c0b25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.343596 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:42:54 crc kubenswrapper[4823]: E0121 17:42:54.344036 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.465830 4823 generic.go:334] "Generic (PLEG): container finished" podID="4fac0162-dd33-429a-bb49-6119202c0b25" containerID="ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767" exitCode=0 Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.465874 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77z8x" event={"ID":"4fac0162-dd33-429a-bb49-6119202c0b25","Type":"ContainerDied","Data":"ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767"} Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.465933 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77z8x" event={"ID":"4fac0162-dd33-429a-bb49-6119202c0b25","Type":"ContainerDied","Data":"b08eb6352f7f1134425bfaa70dec0092ce963f9fb8bce0b5e469f2422e30e37d"} Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.465949 4823 scope.go:117] "RemoveContainer" containerID="ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.466996 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77z8x" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.493276 4823 scope.go:117] "RemoveContainer" containerID="ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.500926 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77z8x"] Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.512547 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-77z8x"] Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.516327 4823 scope.go:117] "RemoveContainer" containerID="5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.578134 4823 scope.go:117] "RemoveContainer" containerID="ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767" Jan 21 17:42:54 crc kubenswrapper[4823]: E0121 17:42:54.578707 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767\": container with ID starting with ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767 not found: ID does not exist" containerID="ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.578763 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767"} err="failed to get container status \"ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767\": rpc error: code = NotFound desc = could not find container \"ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767\": container with ID starting with ee7904b54f347e3372e09bb1bc36bc0bce38cbf695e54b34df62268c9dcc0767 not found: ID does not exist" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.578793 4823 scope.go:117] "RemoveContainer" containerID="ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf" Jan 21 17:42:54 crc kubenswrapper[4823]: E0121 17:42:54.579217 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf\": container with ID starting with ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf not found: ID does not exist" containerID="ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.579244 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf"} err="failed to get container status \"ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf\": rpc error: code = NotFound desc = could not find container \"ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf\": container with ID starting with ee70ff498fad21445c49405fcf6f4f63300b8597ed219a55de9caf4ab8c0cdaf not found: ID does not exist" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.579260 4823 scope.go:117] "RemoveContainer" containerID="5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7" Jan 21 17:42:54 crc kubenswrapper[4823]: E0121 17:42:54.579615 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7\": container with ID starting with 5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7 not found: ID does not exist" containerID="5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7" Jan 21 17:42:54 crc kubenswrapper[4823]: I0121 17:42:54.579637 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7"} err="failed to get container status \"5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7\": rpc error: code = NotFound desc = could not find container \"5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7\": container with ID starting with 5b012d2ceb894faade5a1943670610115b5c905a720b564b4f7e71c6bb2f62b7 not found: ID does not exist" Jan 21 17:42:55 crc kubenswrapper[4823]: I0121 17:42:55.367195 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fac0162-dd33-429a-bb49-6119202c0b25" path="/var/lib/kubelet/pods/4fac0162-dd33-429a-bb49-6119202c0b25/volumes" Jan 21 17:43:07 crc kubenswrapper[4823]: I0121 17:43:07.344890 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:43:07 crc kubenswrapper[4823]: E0121 17:43:07.345666 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.345748 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fz22c"] Jan 21 17:43:12 crc kubenswrapper[4823]: E0121 17:43:12.347511 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerName="registry-server" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.347532 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerName="registry-server" Jan 21 17:43:12 crc kubenswrapper[4823]: E0121 17:43:12.347553 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerName="extract-content" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.347561 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerName="extract-content" Jan 21 17:43:12 crc kubenswrapper[4823]: E0121 17:43:12.347571 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fac0162-dd33-429a-bb49-6119202c0b25" containerName="registry-server" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.347579 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fac0162-dd33-429a-bb49-6119202c0b25" containerName="registry-server" Jan 21 17:43:12 crc kubenswrapper[4823]: E0121 17:43:12.347602 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerName="extract-utilities" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.347611 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerName="extract-utilities" Jan 21 17:43:12 crc kubenswrapper[4823]: E0121 17:43:12.347641 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fac0162-dd33-429a-bb49-6119202c0b25" containerName="extract-utilities" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.347648 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fac0162-dd33-429a-bb49-6119202c0b25" containerName="extract-utilities" Jan 21 17:43:12 crc kubenswrapper[4823]: E0121 17:43:12.347670 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fac0162-dd33-429a-bb49-6119202c0b25" containerName="extract-content" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.347677 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fac0162-dd33-429a-bb49-6119202c0b25" containerName="extract-content" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.348085 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="814aebfd-116a-4e0a-9def-1c41fabbf7df" containerName="registry-server" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.348104 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fac0162-dd33-429a-bb49-6119202c0b25" containerName="registry-server" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.350035 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.358328 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz22c"] Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.503091 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-utilities\") pod \"redhat-marketplace-fz22c\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.503428 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-catalog-content\") pod \"redhat-marketplace-fz22c\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.503734 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfzbm\" (UniqueName: \"kubernetes.io/projected/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-kube-api-access-vfzbm\") pod \"redhat-marketplace-fz22c\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.606298 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfzbm\" (UniqueName: \"kubernetes.io/projected/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-kube-api-access-vfzbm\") pod \"redhat-marketplace-fz22c\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.606386 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-utilities\") pod 
\"redhat-marketplace-fz22c\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.606966 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-utilities\") pod \"redhat-marketplace-fz22c\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.606450 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-catalog-content\") pod \"redhat-marketplace-fz22c\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.607180 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-catalog-content\") pod \"redhat-marketplace-fz22c\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.642464 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfzbm\" (UniqueName: \"kubernetes.io/projected/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-kube-api-access-vfzbm\") pod \"redhat-marketplace-fz22c\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.674356 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:12 crc kubenswrapper[4823]: I0121 17:43:12.949200 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz22c"] Jan 21 17:43:13 crc kubenswrapper[4823]: I0121 17:43:13.659769 4823 generic.go:334] "Generic (PLEG): container finished" podID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerID="d88aed76598e44ec8e412e284b1dee3e45b996ca3c25d07708269ddddc9a988b" exitCode=0 Jan 21 17:43:13 crc kubenswrapper[4823]: I0121 17:43:13.659866 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz22c" event={"ID":"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3","Type":"ContainerDied","Data":"d88aed76598e44ec8e412e284b1dee3e45b996ca3c25d07708269ddddc9a988b"} Jan 21 17:43:13 crc kubenswrapper[4823]: I0121 17:43:13.660061 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz22c" event={"ID":"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3","Type":"ContainerStarted","Data":"2f1da35c3e61c154a3f4722bdcc8c88dbfa80c1b4a3f0d0aee2d31e8d7f87044"} Jan 21 17:43:15 crc kubenswrapper[4823]: I0121 17:43:15.701664 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz22c" event={"ID":"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3","Type":"ContainerStarted","Data":"e4df4950c0b280db4934d9f67f37f9aa163264eb640516cc6bae80f5afffccee"} Jan 21 17:43:16 crc kubenswrapper[4823]: I0121 17:43:16.716229 4823 generic.go:334] "Generic (PLEG): container finished" podID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerID="e4df4950c0b280db4934d9f67f37f9aa163264eb640516cc6bae80f5afffccee" exitCode=0 Jan 21 17:43:16 crc kubenswrapper[4823]: I0121 17:43:16.716313 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz22c" event={"ID":"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3","Type":"ContainerDied","Data":"e4df4950c0b280db4934d9f67f37f9aa163264eb640516cc6bae80f5afffccee"} Jan 21 17:43:18 crc kubenswrapper[4823]: I0121 17:43:18.740470 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz22c" event={"ID":"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3","Type":"ContainerStarted","Data":"cb2b7d0939529af4b782646dd0961f0a26c13c0b36f8671916b34cf5dd0c2529"} Jan 21 17:43:18 crc kubenswrapper[4823]: I0121 17:43:18.774358 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fz22c" podStartSLOduration=2.925325925 podStartE2EDuration="6.774328873s" podCreationTimestamp="2026-01-21 17:43:12 +0000 UTC" firstStartedPulling="2026-01-21 17:43:13.661835518 +0000 UTC m=+1594.587966378" lastFinishedPulling="2026-01-21 17:43:17.510838466 +0000 UTC m=+1598.436969326" observedRunningTime="2026-01-21 17:43:18.763787233 +0000 UTC m=+1599.689918093" watchObservedRunningTime="2026-01-21 17:43:18.774328873 +0000 UTC m=+1599.700459733" Jan 21 17:43:21 crc kubenswrapper[4823]: I0121 17:43:21.344298 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:43:21 crc kubenswrapper[4823]: E0121 17:43:21.344815 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:43:22 crc kubenswrapper[4823]: I0121 17:43:22.675825 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:22 crc kubenswrapper[4823]: I0121 17:43:22.675972 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:22 crc kubenswrapper[4823]: I0121 17:43:22.728574 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:22 crc kubenswrapper[4823]: I0121 17:43:22.831155 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:22 crc kubenswrapper[4823]: I0121 17:43:22.969193 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz22c"] Jan 21 17:43:24 crc kubenswrapper[4823]: I0121 17:43:24.806443 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fz22c" podUID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerName="registry-server" containerID="cri-o://cb2b7d0939529af4b782646dd0961f0a26c13c0b36f8671916b34cf5dd0c2529" gracePeriod=2 Jan 21 17:43:25 crc kubenswrapper[4823]: I0121 17:43:25.822154 4823 generic.go:334] "Generic (PLEG): container finished" podID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerID="cb2b7d0939529af4b782646dd0961f0a26c13c0b36f8671916b34cf5dd0c2529" exitCode=0 Jan 21 17:43:25 crc kubenswrapper[4823]: I0121 17:43:25.822382 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz22c" event={"ID":"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3","Type":"ContainerDied","Data":"cb2b7d0939529af4b782646dd0961f0a26c13c0b36f8671916b34cf5dd0c2529"} Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.042414 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.215736 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-utilities\") pod \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.216212 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-catalog-content\") pod \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.216285 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfzbm\" (UniqueName: \"kubernetes.io/projected/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-kube-api-access-vfzbm\") pod \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\" (UID: \"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3\") " Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.218531 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-utilities" (OuterVolumeSpecName: "utilities") pod "9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" (UID: "9584dc5d-e8ad-4a77-9aa7-1a81b74beff3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.223077 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-kube-api-access-vfzbm" (OuterVolumeSpecName: "kube-api-access-vfzbm") pod "9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" (UID: "9584dc5d-e8ad-4a77-9aa7-1a81b74beff3"). InnerVolumeSpecName "kube-api-access-vfzbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.242879 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" (UID: "9584dc5d-e8ad-4a77-9aa7-1a81b74beff3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.318446 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfzbm\" (UniqueName: \"kubernetes.io/projected/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-kube-api-access-vfzbm\") on node \"crc\" DevicePath \"\"" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.318500 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.318518 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.852520 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz22c" event={"ID":"9584dc5d-e8ad-4a77-9aa7-1a81b74beff3","Type":"ContainerDied","Data":"2f1da35c3e61c154a3f4722bdcc8c88dbfa80c1b4a3f0d0aee2d31e8d7f87044"} Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.852577 4823 scope.go:117] "RemoveContainer" containerID="cb2b7d0939529af4b782646dd0961f0a26c13c0b36f8671916b34cf5dd0c2529" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.852586 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz22c" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.882986 4823 scope.go:117] "RemoveContainer" containerID="e4df4950c0b280db4934d9f67f37f9aa163264eb640516cc6bae80f5afffccee" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.895988 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz22c"] Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.905147 4823 scope.go:117] "RemoveContainer" containerID="d88aed76598e44ec8e412e284b1dee3e45b996ca3c25d07708269ddddc9a988b" Jan 21 17:43:26 crc kubenswrapper[4823]: I0121 17:43:26.907194 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz22c"] Jan 21 17:43:27 crc kubenswrapper[4823]: I0121 17:43:27.356130 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" path="/var/lib/kubelet/pods/9584dc5d-e8ad-4a77-9aa7-1a81b74beff3/volumes" Jan 21 17:43:35 crc kubenswrapper[4823]: I0121 17:43:35.344257 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:43:35 crc kubenswrapper[4823]: E0121 17:43:35.345005 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:43:49 crc kubenswrapper[4823]: I0121 17:43:49.351216 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:43:49 crc kubenswrapper[4823]: E0121 17:43:49.352162 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:44:02 crc kubenswrapper[4823]: I0121 17:44:02.343917 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:44:02 crc kubenswrapper[4823]: E0121 17:44:02.344700 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:44:17 crc kubenswrapper[4823]: I0121 17:44:17.344098 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:44:17 crc kubenswrapper[4823]: E0121 17:44:17.346272 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:44:26 crc kubenswrapper[4823]: I0121 17:44:26.854967 4823 scope.go:117] "RemoveContainer" containerID="c89ca59243fdfb627d17178647127f44d37129767c8cc6f577297bc0e41744aa" Jan 21 17:44:26 crc kubenswrapper[4823]: I0121 17:44:26.881347 4823 scope.go:117] "RemoveContainer" containerID="2b3af23d6abbde90f3fa65bd663e913d5eadd6edd2da71b26b143190f993db32" Jan 21 17:44:28 crc kubenswrapper[4823]: I0121 17:44:28.344068 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:44:28 crc kubenswrapper[4823]: E0121 17:44:28.344639 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:44:42 crc kubenswrapper[4823]: I0121 17:44:42.343708 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:44:42 crc kubenswrapper[4823]: E0121 17:44:42.344490 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:44:54 crc kubenswrapper[4823]: I0121 17:44:54.343816 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:44:54 crc 
kubenswrapper[4823]: E0121 17:44:54.345300 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.169626 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f"] Jan 21 17:45:00 crc kubenswrapper[4823]: E0121 17:45:00.170723 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerName="extract-content" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.170759 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerName="extract-content" Jan 21 17:45:00 crc kubenswrapper[4823]: E0121 17:45:00.170775 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerName="registry-server" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.170783 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerName="registry-server" Jan 21 17:45:00 crc kubenswrapper[4823]: E0121 17:45:00.170797 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerName="extract-utilities" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.170805 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerName="extract-utilities" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.171044 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9584dc5d-e8ad-4a77-9aa7-1a81b74beff3" containerName="registry-server" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.171923 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.174249 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.175943 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.197817 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f"] Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.284635 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57fdg\" (UniqueName: \"kubernetes.io/projected/77c50ca9-47c8-493c-9e9b-28dedd84c304-kube-api-access-57fdg\") pod \"collect-profiles-29483625-zvw5f\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.284879 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77c50ca9-47c8-493c-9e9b-28dedd84c304-secret-volume\") pod \"collect-profiles-29483625-zvw5f\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.285254 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c50ca9-47c8-493c-9e9b-28dedd84c304-config-volume\") pod \"collect-profiles-29483625-zvw5f\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.387155 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c50ca9-47c8-493c-9e9b-28dedd84c304-config-volume\") pod \"collect-profiles-29483625-zvw5f\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.387727 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57fdg\" (UniqueName: \"kubernetes.io/projected/77c50ca9-47c8-493c-9e9b-28dedd84c304-kube-api-access-57fdg\") pod \"collect-profiles-29483625-zvw5f\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.387997 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77c50ca9-47c8-493c-9e9b-28dedd84c304-secret-volume\") pod \"collect-profiles-29483625-zvw5f\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.388195 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c50ca9-47c8-493c-9e9b-28dedd84c304-config-volume\") pod 
\"collect-profiles-29483625-zvw5f\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.394605 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77c50ca9-47c8-493c-9e9b-28dedd84c304-secret-volume\") pod \"collect-profiles-29483625-zvw5f\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.405301 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57fdg\" (UniqueName: \"kubernetes.io/projected/77c50ca9-47c8-493c-9e9b-28dedd84c304-kube-api-access-57fdg\") pod \"collect-profiles-29483625-zvw5f\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.501641 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:00 crc kubenswrapper[4823]: I0121 17:45:00.970682 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f"] Jan 21 17:45:01 crc kubenswrapper[4823]: I0121 17:45:01.006145 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" event={"ID":"77c50ca9-47c8-493c-9e9b-28dedd84c304","Type":"ContainerStarted","Data":"2a748a903451007bb3d34c034dde28603b6f3e5ebe68e99646dc140bdd84127e"} Jan 21 17:45:02 crc kubenswrapper[4823]: I0121 17:45:02.020314 4823 generic.go:334] "Generic (PLEG): container finished" podID="77c50ca9-47c8-493c-9e9b-28dedd84c304" containerID="29f8e9600dfb7fe8337bd1e0282219c2cfea4a9de868b31f19a1d7d54c67cfdc" exitCode=0 Jan 21 17:45:02 crc kubenswrapper[4823]: I0121 17:45:02.020411 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" event={"ID":"77c50ca9-47c8-493c-9e9b-28dedd84c304","Type":"ContainerDied","Data":"29f8e9600dfb7fe8337bd1e0282219c2cfea4a9de868b31f19a1d7d54c67cfdc"} Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.373064 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.448482 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c50ca9-47c8-493c-9e9b-28dedd84c304-config-volume\") pod \"77c50ca9-47c8-493c-9e9b-28dedd84c304\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.448633 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77c50ca9-47c8-493c-9e9b-28dedd84c304-secret-volume\") pod \"77c50ca9-47c8-493c-9e9b-28dedd84c304\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.448664 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57fdg\" (UniqueName: \"kubernetes.io/projected/77c50ca9-47c8-493c-9e9b-28dedd84c304-kube-api-access-57fdg\") pod \"77c50ca9-47c8-493c-9e9b-28dedd84c304\" (UID: \"77c50ca9-47c8-493c-9e9b-28dedd84c304\") " Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.449132 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77c50ca9-47c8-493c-9e9b-28dedd84c304-config-volume" (OuterVolumeSpecName: "config-volume") pod "77c50ca9-47c8-493c-9e9b-28dedd84c304" (UID: "77c50ca9-47c8-493c-9e9b-28dedd84c304"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.449999 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c50ca9-47c8-493c-9e9b-28dedd84c304-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.455734 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c50ca9-47c8-493c-9e9b-28dedd84c304-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77c50ca9-47c8-493c-9e9b-28dedd84c304" (UID: "77c50ca9-47c8-493c-9e9b-28dedd84c304"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.456486 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c50ca9-47c8-493c-9e9b-28dedd84c304-kube-api-access-57fdg" (OuterVolumeSpecName: "kube-api-access-57fdg") pod "77c50ca9-47c8-493c-9e9b-28dedd84c304" (UID: "77c50ca9-47c8-493c-9e9b-28dedd84c304"). InnerVolumeSpecName "kube-api-access-57fdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.552083 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77c50ca9-47c8-493c-9e9b-28dedd84c304-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:45:03 crc kubenswrapper[4823]: I0121 17:45:03.552145 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57fdg\" (UniqueName: \"kubernetes.io/projected/77c50ca9-47c8-493c-9e9b-28dedd84c304-kube-api-access-57fdg\") on node \"crc\" DevicePath \"\"" Jan 21 17:45:04 crc kubenswrapper[4823]: I0121 17:45:04.039062 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" event={"ID":"77c50ca9-47c8-493c-9e9b-28dedd84c304","Type":"ContainerDied","Data":"2a748a903451007bb3d34c034dde28603b6f3e5ebe68e99646dc140bdd84127e"} Jan 21 17:45:04 crc kubenswrapper[4823]: I0121 17:45:04.039108 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a748a903451007bb3d34c034dde28603b6f3e5ebe68e99646dc140bdd84127e" Jan 21 17:45:04 crc kubenswrapper[4823]: I0121 17:45:04.039124 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f" Jan 21 17:45:06 crc kubenswrapper[4823]: I0121 17:45:06.343563 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:45:06 crc kubenswrapper[4823]: E0121 17:45:06.344336 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:45:08 crc kubenswrapper[4823]: I0121 17:45:08.058117 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8wgkf"] Jan 21 17:45:08 crc kubenswrapper[4823]: I0121 17:45:08.067729 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8070-account-create-update-mpj6v"] Jan 21 17:45:08 crc kubenswrapper[4823]: I0121 17:45:08.081256 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8wgkf"] Jan 21 17:45:08 crc kubenswrapper[4823]: I0121 17:45:08.094305 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8070-account-create-update-mpj6v"] Jan 21 17:45:09 crc kubenswrapper[4823]: I0121 17:45:09.035155 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5jn7l"] Jan 21 17:45:09 crc kubenswrapper[4823]: I0121 17:45:09.049683 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c720-account-create-update-ph4fq"] Jan 21 17:45:09 crc kubenswrapper[4823]: I0121 17:45:09.062965 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c720-account-create-update-ph4fq"] Jan 21 17:45:09 crc kubenswrapper[4823]: I0121 17:45:09.074538 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5jn7l"] Jan 21 17:45:09 crc kubenswrapper[4823]: I0121 17:45:09.359433 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5fea18d0-be48-4202-b830-80f527487892" path="/var/lib/kubelet/pods/5fea18d0-be48-4202-b830-80f527487892/volumes" Jan 21 17:45:09 crc kubenswrapper[4823]: I0121 17:45:09.360950 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c67c073d-2bdd-4bcc-a73e-9eed04a74f17" path="/var/lib/kubelet/pods/c67c073d-2bdd-4bcc-a73e-9eed04a74f17/volumes" Jan 21 17:45:09 crc kubenswrapper[4823]: I0121 17:45:09.362027 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a94bd0-3d15-4c8d-8974-af4f7772cc74" path="/var/lib/kubelet/pods/e6a94bd0-3d15-4c8d-8974-af4f7772cc74/volumes" Jan 21 17:45:09 crc kubenswrapper[4823]: I0121 17:45:09.363285 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8417175-6b47-4fd9-961c-e422480b3353" path="/var/lib/kubelet/pods/e8417175-6b47-4fd9-961c-e422480b3353/volumes" Jan 21 17:45:12 crc kubenswrapper[4823]: I0121 17:45:12.036018 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-q27fp"] Jan 21 17:45:12 crc kubenswrapper[4823]: I0121 17:45:12.048041 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-q27fp"] Jan 21 17:45:13 crc kubenswrapper[4823]: I0121 17:45:13.039938 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-e2a3-account-create-update-mx6xz"] Jan 21 17:45:13 crc kubenswrapper[4823]: I0121 17:45:13.051539 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-e2a3-account-create-update-mx6xz"] Jan 21 17:45:13 crc kubenswrapper[4823]: I0121 17:45:13.356903 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a901c8f-bafb-433a-8de8-dd20b97f927d" path="/var/lib/kubelet/pods/2a901c8f-bafb-433a-8de8-dd20b97f927d/volumes" Jan 21 17:45:13 crc kubenswrapper[4823]: I0121 17:45:13.358610 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644c6573-fd38-482c-8cef-409affff3581" path="/var/lib/kubelet/pods/644c6573-fd38-482c-8cef-409affff3581/volumes" Jan 21 17:45:16 crc kubenswrapper[4823]: I0121 17:45:16.032833 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-m2t5j"] Jan 21 17:45:16 crc kubenswrapper[4823]: I0121 17:45:16.043381 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5f55-account-create-update-t6t7d"] Jan 21 17:45:16 crc kubenswrapper[4823]: I0121 17:45:16.051892 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5f55-account-create-update-t6t7d"] Jan 21 17:45:16 crc kubenswrapper[4823]: I0121 17:45:16.061246 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-m2t5j"] Jan 21 17:45:17 crc kubenswrapper[4823]: I0121 17:45:17.369136 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a" path="/var/lib/kubelet/pods/1f3d6f07-c8ed-4bcd-b03c-c1c371f1af1a/volumes" Jan 21 17:45:17 crc kubenswrapper[4823]: I0121 17:45:17.370566 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d755b5dc-e412-4301-a237-a0228a9378f4" path="/var/lib/kubelet/pods/d755b5dc-e412-4301-a237-a0228a9378f4/volumes" Jan 21 17:45:18 crc kubenswrapper[4823]: I0121 17:45:18.344004 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:45:18 crc kubenswrapper[4823]: E0121 17:45:18.344546 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:45:26 crc kubenswrapper[4823]: I0121 17:45:26.953470 4823 scope.go:117] "RemoveContainer" containerID="eaefc5586c43d70f4e94b669b4a9b80639b39c77fb44e8ef98c8ea1c6e88fa77" Jan 21 17:45:26 crc kubenswrapper[4823]: I0121 17:45:26.981081 4823 scope.go:117] "RemoveContainer" containerID="b2d00b434edff5ceee405bfe6e0b2b9eed8c1bd40e67aafe523cd0a828e78761" Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.012278 4823 scope.go:117] "RemoveContainer" containerID="7276526551e6347d3ed284b69f3f018bd6188f5a860bbe0792cad5b623e1522a" Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.052451 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-pmgrr"] Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.070879 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-pmgrr"] Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.074609 4823 scope.go:117] "RemoveContainer" containerID="97ce6be46288dcd33a81e5bb9ff4651f313862b442b113096dc8ef87e8c7f9a1" Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.126659 4823 scope.go:117] "RemoveContainer" containerID="746440c09887bb5adec32d9050cd855d3d5110a1f1276d2da0add51219afb108" Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.170770 4823 scope.go:117] "RemoveContainer" containerID="d3322b6d8c50f9526b2340e0d354f64ad043daccd18de494b444b0edce175e17" Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.221974 4823 scope.go:117] "RemoveContainer" containerID="8ddd086bba06196dc787b5a909e279cd771c54066edc3ab82d746a642af13bb7" Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.252547 4823 scope.go:117] "RemoveContainer" containerID="b9189767fdf8874b86bb446add4d21e2113dedbdffe8c0d0c4d6cdc1cdefb7e8" Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.296681 4823 scope.go:117] "RemoveContainer" containerID="29bb2602b9fda9898bf18f9ed69381733427a046c385cfd662b682bde67c5ef3" Jan 21 17:45:27 crc kubenswrapper[4823]: I0121 17:45:27.359179 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8dc75f0-c0f1-456e-912b-221ee7b6697c" path="/var/lib/kubelet/pods/e8dc75f0-c0f1-456e-912b-221ee7b6697c/volumes" Jan 21 17:45:28 crc kubenswrapper[4823]: I0121 17:45:28.044217 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mhwlz"] Jan 21 17:45:28 crc kubenswrapper[4823]: I0121 17:45:28.060693 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mhwlz"] Jan 21 17:45:28 crc kubenswrapper[4823]: I0121 17:45:28.071829 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fv9fn"] Jan 21 17:45:28 crc kubenswrapper[4823]: I0121 17:45:28.083760 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7899-account-create-update-wzjrc"] Jan 21 17:45:28 crc kubenswrapper[4823]: I0121 17:45:28.095478 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fv9fn"] Jan 21 17:45:28 crc kubenswrapper[4823]: I0121 17:45:28.104516 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7899-account-create-update-wzjrc"] Jan 21 17:45:29 crc kubenswrapper[4823]: I0121 
17:45:29.034274 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-504c-account-create-update-xlrsl"] Jan 21 17:45:29 crc kubenswrapper[4823]: I0121 17:45:29.043188 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-504c-account-create-update-xlrsl"] Jan 21 17:45:29 crc kubenswrapper[4823]: I0121 17:45:29.050642 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1561-account-create-update-lmxjf"] Jan 21 17:45:29 crc kubenswrapper[4823]: I0121 17:45:29.059970 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1561-account-create-update-lmxjf"] Jan 21 17:45:29 crc kubenswrapper[4823]: I0121 17:45:29.356261 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20dae312-edd5-4ad7-92ca-6c61465c9e5a" path="/var/lib/kubelet/pods/20dae312-edd5-4ad7-92ca-6c61465c9e5a/volumes" Jan 21 17:45:29 crc kubenswrapper[4823]: I0121 17:45:29.357443 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4212f0db-bdbb-4cb0-87c0-d959e974c5a0" path="/var/lib/kubelet/pods/4212f0db-bdbb-4cb0-87c0-d959e974c5a0/volumes" Jan 21 17:45:29 crc kubenswrapper[4823]: I0121 17:45:29.358745 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6562d200-2356-4f66-905c-27ac16f6bd68" path="/var/lib/kubelet/pods/6562d200-2356-4f66-905c-27ac16f6bd68/volumes" Jan 21 17:45:29 crc kubenswrapper[4823]: I0121 17:45:29.359724 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ece2f30-4be3-4661-a2c8-bb5415023a2d" path="/var/lib/kubelet/pods/6ece2f30-4be3-4661-a2c8-bb5415023a2d/volumes" Jan 21 17:45:29 crc kubenswrapper[4823]: I0121 17:45:29.361492 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfacc23e-26c3-4308-ab42-371932e13246" path="/var/lib/kubelet/pods/dfacc23e-26c3-4308-ab42-371932e13246/volumes" Jan 21 17:45:33 crc kubenswrapper[4823]: I0121 17:45:33.344183 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:45:33 crc kubenswrapper[4823]: E0121 17:45:33.344721 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:45:44 crc kubenswrapper[4823]: I0121 17:45:44.343478 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:45:44 crc kubenswrapper[4823]: E0121 17:45:44.344350 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:45:59 crc kubenswrapper[4823]: I0121 17:45:59.352528 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:45:59 crc kubenswrapper[4823]: E0121 17:45:59.353529 4823 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:45:59 crc kubenswrapper[4823]: I0121 17:45:59.608221 4823 generic.go:334] "Generic (PLEG): container finished" podID="6acc67e0-e641-4a80-a9e0-e1373d9de675" containerID="6bd9d48893eb245b4b218fffe214590e6f5515f6bf930870b031610f9b0f8a4e" exitCode=0 Jan 21 17:45:59 crc kubenswrapper[4823]: I0121 17:45:59.608317 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" event={"ID":"6acc67e0-e641-4a80-a9e0-e1373d9de675","Type":"ContainerDied","Data":"6bd9d48893eb245b4b218fffe214590e6f5515f6bf930870b031610f9b0f8a4e"} Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.141436 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.282817 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-bootstrap-combined-ca-bundle\") pod \"6acc67e0-e641-4a80-a9e0-e1373d9de675\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.282977 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-ssh-key-openstack-edpm-ipam\") pod \"6acc67e0-e641-4a80-a9e0-e1373d9de675\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.283036 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgwc8\" (UniqueName: \"kubernetes.io/projected/6acc67e0-e641-4a80-a9e0-e1373d9de675-kube-api-access-mgwc8\") pod \"6acc67e0-e641-4a80-a9e0-e1373d9de675\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.283104 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-inventory\") pod \"6acc67e0-e641-4a80-a9e0-e1373d9de675\" (UID: \"6acc67e0-e641-4a80-a9e0-e1373d9de675\") " Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.289610 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acc67e0-e641-4a80-a9e0-e1373d9de675-kube-api-access-mgwc8" (OuterVolumeSpecName: "kube-api-access-mgwc8") pod "6acc67e0-e641-4a80-a9e0-e1373d9de675" (UID: "6acc67e0-e641-4a80-a9e0-e1373d9de675"). InnerVolumeSpecName "kube-api-access-mgwc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.294210 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6acc67e0-e641-4a80-a9e0-e1373d9de675" (UID: "6acc67e0-e641-4a80-a9e0-e1373d9de675"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.313448 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-inventory" (OuterVolumeSpecName: "inventory") pod "6acc67e0-e641-4a80-a9e0-e1373d9de675" (UID: "6acc67e0-e641-4a80-a9e0-e1373d9de675"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.314715 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6acc67e0-e641-4a80-a9e0-e1373d9de675" (UID: "6acc67e0-e641-4a80-a9e0-e1373d9de675"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.385962 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgwc8\" (UniqueName: \"kubernetes.io/projected/6acc67e0-e641-4a80-a9e0-e1373d9de675-kube-api-access-mgwc8\") on node \"crc\" DevicePath \"\"" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.385995 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.386005 4823 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.386015 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6acc67e0-e641-4a80-a9e0-e1373d9de675-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.628750 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" event={"ID":"6acc67e0-e641-4a80-a9e0-e1373d9de675","Type":"ContainerDied","Data":"eb6618fb751d6ca0c1043cc7b811806f017f1c503b10a4f4b2fc451312a7f214"} Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.628815 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6618fb751d6ca0c1043cc7b811806f017f1c503b10a4f4b2fc451312a7f214" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.628913 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-48h89" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.772905 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725"] Jan 21 17:46:01 crc kubenswrapper[4823]: E0121 17:46:01.773482 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c50ca9-47c8-493c-9e9b-28dedd84c304" containerName="collect-profiles" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.773506 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c50ca9-47c8-493c-9e9b-28dedd84c304" containerName="collect-profiles" Jan 21 17:46:01 crc kubenswrapper[4823]: E0121 17:46:01.773537 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acc67e0-e641-4a80-a9e0-e1373d9de675" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.773550 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acc67e0-e641-4a80-a9e0-e1373d9de675" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.773888 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6acc67e0-e641-4a80-a9e0-e1373d9de675" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.773926 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c50ca9-47c8-493c-9e9b-28dedd84c304" containerName="collect-profiles" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.774996 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.778400 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.780340 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.780741 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.781127 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.784348 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725"] Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.895340 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4x725\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.895475 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfhpq\" (UniqueName: \"kubernetes.io/projected/faa0648b-80a8-4ccf-8295-897de512b670-kube-api-access-rfhpq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4x725\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.895509 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4x725\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.996641 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4x725\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.996802 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4x725\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:01 crc kubenswrapper[4823]: I0121 17:46:01.996949 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfhpq\" (UniqueName: \"kubernetes.io/projected/faa0648b-80a8-4ccf-8295-897de512b670-kube-api-access-rfhpq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4x725\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.002345 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4x725\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.003040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4x725\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.038601 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfhpq\" (UniqueName: \"kubernetes.io/projected/faa0648b-80a8-4ccf-8295-897de512b670-kube-api-access-rfhpq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4x725\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.107496 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.340350 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-psmc8"] Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.342280 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.352314 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psmc8"] Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.509132 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq8tn\" (UniqueName: \"kubernetes.io/projected/07b05255-f9e6-4514-af55-344e0d3d1b7a-kube-api-access-mq8tn\") pod \"redhat-operators-psmc8\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.509524 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-utilities\") pod \"redhat-operators-psmc8\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.509720 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-catalog-content\") pod \"redhat-operators-psmc8\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.612148 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-catalog-content\") pod \"redhat-operators-psmc8\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.612302 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq8tn\" (UniqueName: \"kubernetes.io/projected/07b05255-f9e6-4514-af55-344e0d3d1b7a-kube-api-access-mq8tn\") pod \"redhat-operators-psmc8\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.612328 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-utilities\") pod \"redhat-operators-psmc8\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.612763 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-catalog-content\") pod \"redhat-operators-psmc8\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.612811 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-utilities\") pod \"redhat-operators-psmc8\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.633343 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq8tn\" (UniqueName: \"kubernetes.io/projected/07b05255-f9e6-4514-af55-344e0d3d1b7a-kube-api-access-mq8tn\") pod \"redhat-operators-psmc8\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.637005 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725"] Jan 21 17:46:02 crc kubenswrapper[4823]: I0121 17:46:02.676779 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.045889 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p7g6z"] Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.056014 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p7g6z"] Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.162277 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psmc8"] Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.372061 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="233f4f9e-6a9f-4da5-8698-45692fb68176" path="/var/lib/kubelet/pods/233f4f9e-6a9f-4da5-8698-45692fb68176/volumes" Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.645990 4823 generic.go:334] "Generic (PLEG): container finished" podID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerID="a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba" exitCode=0 Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.646062 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psmc8" event={"ID":"07b05255-f9e6-4514-af55-344e0d3d1b7a","Type":"ContainerDied","Data":"a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba"} Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.646087 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psmc8" event={"ID":"07b05255-f9e6-4514-af55-344e0d3d1b7a","Type":"ContainerStarted","Data":"da6e880535d845ea55b1d8acde222d1d03b5dbdeed98c4d246b8749e1d9c5f25"} Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.647938 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" event={"ID":"faa0648b-80a8-4ccf-8295-897de512b670","Type":"ContainerStarted","Data":"280de06301838b406e00b338dd967793afc0622bf2e09409f6eeffafed6b423c"} Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.647990 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" event={"ID":"faa0648b-80a8-4ccf-8295-897de512b670","Type":"ContainerStarted","Data":"bd2d29f28e83d3d67b6a7ccf36559a210ea9a0c0252180cc2f36b824f0f3fc3b"} Jan 21 17:46:03 crc kubenswrapper[4823]: I0121 17:46:03.683024 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" podStartSLOduration=2.29150584 podStartE2EDuration="2.683001414s" podCreationTimestamp="2026-01-21 17:46:01 +0000 UTC" firstStartedPulling="2026-01-21 17:46:02.640254185 +0000 UTC m=+1763.566385045" lastFinishedPulling="2026-01-21 17:46:03.031749739 +0000 UTC m=+1763.957880619" observedRunningTime="2026-01-21 17:46:03.681143708 +0000 UTC m=+1764.607274568" watchObservedRunningTime="2026-01-21 17:46:03.683001414 +0000 UTC m=+1764.609132274" Jan 21 17:46:04 crc kubenswrapper[4823]: I0121 17:46:04.657952 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psmc8" event={"ID":"07b05255-f9e6-4514-af55-344e0d3d1b7a","Type":"ContainerStarted","Data":"a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237"} Jan 21 17:46:05 crc kubenswrapper[4823]: I0121 17:46:05.670623 4823 generic.go:334] "Generic (PLEG): container finished" podID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerID="a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237" exitCode=0 Jan 21 17:46:05 crc kubenswrapper[4823]: I0121 17:46:05.670742 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psmc8" event={"ID":"07b05255-f9e6-4514-af55-344e0d3d1b7a","Type":"ContainerDied","Data":"a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237"} Jan 21 17:46:06 crc kubenswrapper[4823]: I0121 17:46:06.682433 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psmc8" event={"ID":"07b05255-f9e6-4514-af55-344e0d3d1b7a","Type":"ContainerStarted","Data":"54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a"} Jan 21 17:46:06 crc kubenswrapper[4823]: I0121 17:46:06.709216 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-psmc8" podStartSLOduration=2.295885099 podStartE2EDuration="4.709195962s" podCreationTimestamp="2026-01-21 17:46:02 +0000 UTC" firstStartedPulling="2026-01-21 17:46:03.647653149 +0000 UTC m=+1764.573784009" lastFinishedPulling="2026-01-21 17:46:06.060964012 +0000 UTC m=+1766.987094872" observedRunningTime="2026-01-21 17:46:06.698719333 +0000 UTC m=+1767.624850193" watchObservedRunningTime="2026-01-21 17:46:06.709195962 +0000 UTC m=+1767.635326822" Jan 21 17:46:08 crc kubenswrapper[4823]: I0121 17:46:08.038426 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x49h2"] Jan 21 17:46:08 crc kubenswrapper[4823]: I0121 17:46:08.052529 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x49h2"] Jan 21 17:46:09 crc kubenswrapper[4823]: I0121 17:46:09.366134 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e6120a-8367-4154-81be-2f80bc318ac2" path="/var/lib/kubelet/pods/00e6120a-8367-4154-81be-2f80bc318ac2/volumes" Jan 21 17:46:12 crc kubenswrapper[4823]: I0121 17:46:12.344777 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:46:12 crc kubenswrapper[4823]: E0121 17:46:12.345474 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:46:12 crc kubenswrapper[4823]: I0121 17:46:12.677478 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:12 crc kubenswrapper[4823]: I0121 17:46:12.677537 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:12 crc kubenswrapper[4823]: I0121 17:46:12.720489 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:12 crc kubenswrapper[4823]: I0121 17:46:12.802694 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:12 crc kubenswrapper[4823]: I0121 17:46:12.964937 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psmc8"] Jan 21 17:46:13 crc kubenswrapper[4823]: I0121 17:46:13.033044 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qk9xv"] Jan 21 17:46:13 crc kubenswrapper[4823]: I0121 17:46:13.044123 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qk9xv"] Jan 21 17:46:13 crc kubenswrapper[4823]: I0121 17:46:13.356911 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8285d29e-4f51-4b0c-8dd9-c613317c933c" path="/var/lib/kubelet/pods/8285d29e-4f51-4b0c-8dd9-c613317c933c/volumes" Jan 21 17:46:14 crc kubenswrapper[4823]: I0121 17:46:14.758329 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-psmc8" podUID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerName="registry-server" containerID="cri-o://54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a" gracePeriod=2 Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.258331 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.334673 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-utilities\") pod \"07b05255-f9e6-4514-af55-344e0d3d1b7a\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.334769 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-catalog-content\") pod \"07b05255-f9e6-4514-af55-344e0d3d1b7a\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.334827 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq8tn\" (UniqueName: \"kubernetes.io/projected/07b05255-f9e6-4514-af55-344e0d3d1b7a-kube-api-access-mq8tn\") pod \"07b05255-f9e6-4514-af55-344e0d3d1b7a\" (UID: \"07b05255-f9e6-4514-af55-344e0d3d1b7a\") " Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.335713 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-utilities" (OuterVolumeSpecName: "utilities") pod "07b05255-f9e6-4514-af55-344e0d3d1b7a" (UID: "07b05255-f9e6-4514-af55-344e0d3d1b7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.340112 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b05255-f9e6-4514-af55-344e0d3d1b7a-kube-api-access-mq8tn" (OuterVolumeSpecName: "kube-api-access-mq8tn") pod "07b05255-f9e6-4514-af55-344e0d3d1b7a" (UID: "07b05255-f9e6-4514-af55-344e0d3d1b7a"). InnerVolumeSpecName "kube-api-access-mq8tn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.438507 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq8tn\" (UniqueName: \"kubernetes.io/projected/07b05255-f9e6-4514-af55-344e0d3d1b7a-kube-api-access-mq8tn\") on node \"crc\" DevicePath \"\"" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.438777 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.769418 4823 generic.go:334] "Generic (PLEG): container finished" podID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerID="54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a" exitCode=0 Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.769499 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psmc8" event={"ID":"07b05255-f9e6-4514-af55-344e0d3d1b7a","Type":"ContainerDied","Data":"54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a"} Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.770061 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psmc8" event={"ID":"07b05255-f9e6-4514-af55-344e0d3d1b7a","Type":"ContainerDied","Data":"da6e880535d845ea55b1d8acde222d1d03b5dbdeed98c4d246b8749e1d9c5f25"} Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.770087 4823 scope.go:117] "RemoveContainer" containerID="54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.769557 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psmc8" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.791866 4823 scope.go:117] "RemoveContainer" containerID="a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.817297 4823 scope.go:117] "RemoveContainer" containerID="a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.868159 4823 scope.go:117] "RemoveContainer" containerID="54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a" Jan 21 17:46:15 crc kubenswrapper[4823]: E0121 17:46:15.868771 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a\": container with ID starting with 54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a not found: ID does not exist" containerID="54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.868822 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a"} err="failed to get container status \"54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a\": rpc error: code = NotFound desc = could not find container \"54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a\": container with ID starting with 54f7af7b39beb5d5f5bdbc169eff72bbbaf29c20f5cd2e7ddac68a5ac5a5f86a not found: ID does not exist" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.869041 4823 scope.go:117] "RemoveContainer" containerID="a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237" Jan 21 17:46:15 crc kubenswrapper[4823]: E0121 17:46:15.869398 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237\": container with ID starting with a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237 not found: ID does not exist" containerID="a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.869432 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237"} err="failed to get container status \"a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237\": rpc error: code = NotFound desc = could not find container \"a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237\": container with ID starting with a6be426c587e4c45c8b77dafa9f367e75f9bd070e3c19e4ad768e9b11b446237 not found: ID does not exist" Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.869452 4823 scope.go:117] "RemoveContainer" containerID="a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba" Jan 21 17:46:15 crc kubenswrapper[4823]: E0121 17:46:15.869721 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba\": container with ID starting with a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba not found: ID does not exist" containerID="a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba" 
Jan 21 17:46:15 crc kubenswrapper[4823]: I0121 17:46:15.869817 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba"} err="failed to get container status \"a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba\": rpc error: code = NotFound desc = could not find container \"a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba\": container with ID starting with a112bd9050ffe6dbbe5063ed469433a315a30a8d4d7509675015933b5eaab0ba not found: ID does not exist"
Jan 21 17:46:16 crc kubenswrapper[4823]: I0121 17:46:16.248200 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07b05255-f9e6-4514-af55-344e0d3d1b7a" (UID: "07b05255-f9e6-4514-af55-344e0d3d1b7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:46:16 crc kubenswrapper[4823]: I0121 17:46:16.264192 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b05255-f9e6-4514-af55-344e0d3d1b7a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 17:46:16 crc kubenswrapper[4823]: I0121 17:46:16.403387 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psmc8"]
Jan 21 17:46:16 crc kubenswrapper[4823]: I0121 17:46:16.411233 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-psmc8"]
Jan 21 17:46:17 crc kubenswrapper[4823]: I0121 17:46:17.355885 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b05255-f9e6-4514-af55-344e0d3d1b7a" path="/var/lib/kubelet/pods/07b05255-f9e6-4514-af55-344e0d3d1b7a/volumes"
Jan 21 17:46:18 crc kubenswrapper[4823]: I0121 17:46:18.349011 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-zf8s8" podUID="8d053092-d968-421e-8413-7366fb2d5350" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 17:46:24 crc kubenswrapper[4823]: I0121 17:46:24.343846 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb"
Jan 21 17:46:24 crc kubenswrapper[4823]: E0121 17:46:24.344651 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e"
Jan 21 17:46:25 crc kubenswrapper[4823]: I0121 17:46:25.046441 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-jd72h"]
Jan 21 17:46:25 crc kubenswrapper[4823]: I0121 17:46:25.056927 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-jd72h"]
Jan 21 17:46:25 crc kubenswrapper[4823]: I0121 17:46:25.361503 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d800059-c35e-403a-a930-f1db60cf5c75" path="/var/lib/kubelet/pods/0d800059-c35e-403a-a930-f1db60cf5c75/volumes"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.516173 4823 scope.go:117] "RemoveContainer" containerID="28cec83f764dd22d097ae9ca0aa9d95e707fc99c293ba4ce05d697fc6ba63b15"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.569383 4823 scope.go:117] "RemoveContainer" containerID="d21aa915f20d7bd15a8b3ca028a73aa53ded7b8f9ed8b2db5bf48955e3f9c65a"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.601472 4823 scope.go:117] "RemoveContainer" containerID="654e65407bc10457cd615d1fea8a4a841d47c22461612042693420741fbc9978"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.659717 4823 scope.go:117] "RemoveContainer" containerID="7abe6528fff9918ec08a56a0eb3697b2c968668f8e65e52a776f071bad138ff6"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.708588 4823 scope.go:117] "RemoveContainer" containerID="6382a311af50e055cb090b756e3b5bdb972b7be0beb02fe03d50b1861c0b8517"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.762079 4823 scope.go:117] "RemoveContainer" containerID="d43b9c5e7f373fe44cbd3f153ff197fc5273f0cc3a88c46e7c853827455b77a2"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.798009 4823 scope.go:117] "RemoveContainer" containerID="140da72e9910ec786c5983426796e28e700ffc14c372085b180c78919159b4e3"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.825253 4823 scope.go:117] "RemoveContainer" containerID="a308db27616df6948b4640fadcdd3d0792a51fe176ce6e60abd6f38b2f6c646a"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.847056 4823 scope.go:117] "RemoveContainer" containerID="220057f3df1fdb09592a246474dbe37f53716ffa21a02eff738acac1cdaf208d"
Jan 21 17:46:27 crc kubenswrapper[4823]: I0121 17:46:27.871521 4823 scope.go:117] "RemoveContainer" containerID="1bd342e50081eaa1f183f6425035fd9f365aba092f24ca18e9b16e5525e0f073"
Jan 21 17:46:37 crc kubenswrapper[4823]: I0121 17:46:37.343969 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb"
Jan 21 17:46:37 crc kubenswrapper[4823]: E0121 17:46:37.344764 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e"
Jan 21 17:46:51 crc kubenswrapper[4823]: I0121 17:46:51.346169 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb"
Jan 21 17:46:51 crc kubenswrapper[4823]: E0121 17:46:51.346992 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e"
Jan 21 17:46:53 crc kubenswrapper[4823]: I0121 17:46:53.037714 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6l2l8"]
Jan 21 17:46:53 crc kubenswrapper[4823]: I0121 17:46:53.049304 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6l2l8"]
Jan 21 17:46:53 crc kubenswrapper[4823]: I0121 17:46:53.354870 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b459c03-94c5-43e1-bda8-b7e174f3830c" path="/var/lib/kubelet/pods/4b459c03-94c5-43e1-bda8-b7e174f3830c/volumes"
Jan 21 17:47:00 crc kubenswrapper[4823]: I0121 17:47:00.044733 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-j5shb"]
Jan 21 17:47:00 crc kubenswrapper[4823]: I0121 17:47:00.056563 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-j5shb"]
Jan 21 17:47:01 crc kubenswrapper[4823]: I0121 17:47:01.359530 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8b53cb-9154-460c-90ff-72c9987cd31c" path="/var/lib/kubelet/pods/7b8b53cb-9154-460c-90ff-72c9987cd31c/volumes"
Jan 21 17:47:02 crc kubenswrapper[4823]: I0121 17:47:02.345677 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb"
Jan 21 17:47:02 crc kubenswrapper[4823]: E0121 17:47:02.346341 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e"
Jan 21 17:47:15 crc kubenswrapper[4823]: I0121 17:47:15.343646 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb"
Jan 21 17:47:16 crc kubenswrapper[4823]: I0121 17:47:16.478479 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"6730de56359999d27e136e7b8e170ebada5c65c8e7dc82dfa41107b0b851dfa2"}
Jan 21 17:47:28 crc kubenswrapper[4823]: I0121 17:47:28.079532 4823 scope.go:117] "RemoveContainer" containerID="bd1f7a4d9d083eadccae0b7d71781fcdfd8b0069dcfee4ccf299729784a40c95"
Jan 21 17:47:28 crc kubenswrapper[4823]: I0121 17:47:28.128542 4823 scope.go:117] "RemoveContainer" containerID="96ad8c386975f7c37f9f67d9d8d2cb614ed8ee0e1c8b48791d39efc14e843b92"
Jan 21 17:47:57 crc kubenswrapper[4823]: I0121 17:47:57.055626 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qnhv5"]
Jan 21 17:47:57 crc kubenswrapper[4823]: I0121 17:47:57.067728 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vh8wn"]
Jan 21 17:47:57 crc kubenswrapper[4823]: I0121 17:47:57.086194 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-qnhv5"]
Jan 21 17:47:57 crc kubenswrapper[4823]: I0121 17:47:57.096504 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vh8wn"]
Jan 21 17:47:57 crc kubenswrapper[4823]: I0121 17:47:57.358440 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7924f2b-6db5-4473-ae49-91c0d32fa817" path="/var/lib/kubelet/pods/a7924f2b-6db5-4473-ae49-91c0d32fa817/volumes"
Jan 21 17:47:57 crc kubenswrapper[4823]: I0121 17:47:57.359896 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29754ad-e324-474f-a0df-d450b9152aa3" path="/var/lib/kubelet/pods/c29754ad-e324-474f-a0df-d450b9152aa3/volumes"
Jan 21 17:48:04 crc kubenswrapper[4823]: I0121 17:48:04.038161 4823 generic.go:334] "Generic (PLEG): container finished" podID="faa0648b-80a8-4ccf-8295-897de512b670" containerID="280de06301838b406e00b338dd967793afc0622bf2e09409f6eeffafed6b423c" exitCode=0
Jan 21 17:48:04 crc kubenswrapper[4823]: I0121 17:48:04.038893 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" event={"ID":"faa0648b-80a8-4ccf-8295-897de512b670","Type":"ContainerDied","Data":"280de06301838b406e00b338dd967793afc0622bf2e09409f6eeffafed6b423c"}
Jan 21 17:48:04 crc kubenswrapper[4823]: I0121 17:48:04.039349 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-fr4qp"]
Jan 21 17:48:04 crc kubenswrapper[4823]: I0121 17:48:04.059337 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-fr4qp"]
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.357421 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d118987-76ea-46aa-9989-274e87e36d3a" path="/var/lib/kubelet/pods/2d118987-76ea-46aa-9989-274e87e36d3a/volumes"
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.567568 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725"
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.633158 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfhpq\" (UniqueName: \"kubernetes.io/projected/faa0648b-80a8-4ccf-8295-897de512b670-kube-api-access-rfhpq\") pod \"faa0648b-80a8-4ccf-8295-897de512b670\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") "
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.633316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-inventory\") pod \"faa0648b-80a8-4ccf-8295-897de512b670\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") "
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.633453 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-ssh-key-openstack-edpm-ipam\") pod \"faa0648b-80a8-4ccf-8295-897de512b670\" (UID: \"faa0648b-80a8-4ccf-8295-897de512b670\") "
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.638179 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa0648b-80a8-4ccf-8295-897de512b670-kube-api-access-rfhpq" (OuterVolumeSpecName: "kube-api-access-rfhpq") pod "faa0648b-80a8-4ccf-8295-897de512b670" (UID: "faa0648b-80a8-4ccf-8295-897de512b670"). InnerVolumeSpecName "kube-api-access-rfhpq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.664764 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-inventory" (OuterVolumeSpecName: "inventory") pod "faa0648b-80a8-4ccf-8295-897de512b670" (UID: "faa0648b-80a8-4ccf-8295-897de512b670"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.673613 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "faa0648b-80a8-4ccf-8295-897de512b670" (UID: "faa0648b-80a8-4ccf-8295-897de512b670"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.736830 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.736897 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa0648b-80a8-4ccf-8295-897de512b670-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 17:48:05 crc kubenswrapper[4823]: I0121 17:48:05.736912 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfhpq\" (UniqueName: \"kubernetes.io/projected/faa0648b-80a8-4ccf-8295-897de512b670-kube-api-access-rfhpq\") on node \"crc\" DevicePath \"\""
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.063401 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725" event={"ID":"faa0648b-80a8-4ccf-8295-897de512b670","Type":"ContainerDied","Data":"bd2d29f28e83d3d67b6a7ccf36559a210ea9a0c0252180cc2f36b824f0f3fc3b"}
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.063461 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd2d29f28e83d3d67b6a7ccf36559a210ea9a0c0252180cc2f36b824f0f3fc3b"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.063598 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4x725"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.156185 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"]
Jan 21 17:48:06 crc kubenswrapper[4823]: E0121 17:48:06.156972 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerName="extract-utilities"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.156999 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerName="extract-utilities"
Jan 21 17:48:06 crc kubenswrapper[4823]: E0121 17:48:06.157019 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa0648b-80a8-4ccf-8295-897de512b670" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.157031 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa0648b-80a8-4ccf-8295-897de512b670" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 21 17:48:06 crc kubenswrapper[4823]: E0121 17:48:06.157058 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerName="registry-server"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.157067 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerName="registry-server"
Jan 21 17:48:06 crc kubenswrapper[4823]: E0121 17:48:06.157116 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerName="extract-content"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.157124 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerName="extract-content"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.157395 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b05255-f9e6-4514-af55-344e0d3d1b7a" containerName="registry-server"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.157419 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa0648b-80a8-4ccf-8295-897de512b670" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.158635 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.161938 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.161993 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.167311 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.169093 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"]
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.173389 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.246876 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phf5h\" (UniqueName: \"kubernetes.io/projected/9dc1e78d-7f63-4a5e-bade-1f72df039863-kube-api-access-phf5h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.246928 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.246967 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.348909 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phf5h\" (UniqueName: \"kubernetes.io/projected/9dc1e78d-7f63-4a5e-bade-1f72df039863-kube-api-access-phf5h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.349671 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.349746 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.354801 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.354841 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.366262 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phf5h\" (UniqueName: \"kubernetes.io/projected/9dc1e78d-7f63-4a5e-bade-1f72df039863-kube-api-access-phf5h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:06 crc kubenswrapper[4823]: I0121 17:48:06.542631 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:48:07 crc kubenswrapper[4823]: I0121 17:48:07.127935 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"]
Jan 21 17:48:07 crc kubenswrapper[4823]: I0121 17:48:07.141665 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 17:48:08 crc kubenswrapper[4823]: I0121 17:48:08.084365 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf" event={"ID":"9dc1e78d-7f63-4a5e-bade-1f72df039863","Type":"ContainerStarted","Data":"2bc30f92e6ecee061a5e00a370c21d057ef44d225a7a0562eebc5efd4fad3cdc"}
Jan 21 17:48:08 crc kubenswrapper[4823]: I0121 17:48:08.084666 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf" event={"ID":"9dc1e78d-7f63-4a5e-bade-1f72df039863","Type":"ContainerStarted","Data":"1fad1de4caa556eb75203cedd8ceff2385be4d688943e6eb63ccfbadff72ff80"}
Jan 21 17:48:08 crc kubenswrapper[4823]: I0121 17:48:08.107321 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf" podStartSLOduration=1.7052727170000002 podStartE2EDuration="2.107302862s" podCreationTimestamp="2026-01-21 17:48:06 +0000 UTC" firstStartedPulling="2026-01-21 17:48:07.14145328 +0000 UTC m=+1888.067584140" lastFinishedPulling="2026-01-21 17:48:07.543483425 +0000 UTC m=+1888.469614285" observedRunningTime="2026-01-21 17:48:08.097300385 +0000 UTC m=+1889.023431245" watchObservedRunningTime="2026-01-21 17:48:08.107302862 +0000 UTC m=+1889.033433722"
Jan 21 17:48:26 crc kubenswrapper[4823]: I0121 17:48:26.047304 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kbsvz"]
Jan 21 17:48:26 crc kubenswrapper[4823]: I0121 17:48:26.056001 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vsz94"]
Jan 21 17:48:26 crc kubenswrapper[4823]: I0121 17:48:26.066246 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2a23-account-create-update-vbrvj"]
Jan 21 17:48:26 crc kubenswrapper[4823]: I0121 17:48:26.074027 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2a23-account-create-update-vbrvj"]
Jan 21 17:48:26 crc kubenswrapper[4823]: I0121 17:48:26.082485 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vsz94"]
Jan 21 17:48:26 crc kubenswrapper[4823]: I0121 17:48:26.089955 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kbsvz"]
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.039288 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3c8d-account-create-update-2cn9n"]
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.054801 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vxptb"]
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.065692 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d17a-account-create-update-pvshz"]
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.072699 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3c8d-account-create-update-2cn9n"]
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.079467 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vxptb"]
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.086429 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d17a-account-create-update-pvshz"]
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.354522 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373810fa-4b13-4036-99a4-3b1f4d02c0cf" path="/var/lib/kubelet/pods/373810fa-4b13-4036-99a4-3b1f4d02c0cf/volumes"
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.355348 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f9b916-5570-4624-822e-587591152bfe" path="/var/lib/kubelet/pods/45f9b916-5570-4624-822e-587591152bfe/volumes"
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.356408 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72025c1d-b829-47a6-90c0-9be0c98110cb" path="/var/lib/kubelet/pods/72025c1d-b829-47a6-90c0-9be0c98110cb/volumes"
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.357336 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad35032d-b4a7-40e6-b249-ed6fda0b6917" path="/var/lib/kubelet/pods/ad35032d-b4a7-40e6-b249-ed6fda0b6917/volumes"
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.358900 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e444ec-f927-4a23-8437-3e3b06ab3498" path="/var/lib/kubelet/pods/d5e444ec-f927-4a23-8437-3e3b06ab3498/volumes"
Jan 21 17:48:27 crc kubenswrapper[4823]: I0121 17:48:27.359898 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1376bd-dda3-4f8a-93df-1c699366f12e" path="/var/lib/kubelet/pods/fb1376bd-dda3-4f8a-93df-1c699366f12e/volumes"
Jan 21 17:48:28 crc kubenswrapper[4823]: I0121 17:48:28.247325 4823 scope.go:117] "RemoveContainer" containerID="3efd16b176b2fddd96dca6cd196eaf1d81ce9a51d5457d2fafc70810f4df78ba"
Jan 21 17:48:28 crc kubenswrapper[4823]: I0121 17:48:28.274355 4823 scope.go:117] "RemoveContainer" containerID="30187260f0d5086dcd5d88ffd23cd70132d2d71e5d1a98e711b487ce9a9f655b"
Jan 21 17:48:28 crc kubenswrapper[4823]: I0121 17:48:28.341182 4823 scope.go:117] "RemoveContainer" containerID="fd63d187867d12a93605a1f217ca160c7f7ac70b145f5fd1fa052e27f01c12c9"
Jan 21 17:48:28 crc kubenswrapper[4823]: I0121 17:48:28.388217 4823 scope.go:117] "RemoveContainer" containerID="a5f00f2d0772f25423f1ff5c448058598935e70ca5fd79990f5f0c9c91e899a7"
Jan 21 17:48:28 crc kubenswrapper[4823]: I0121 17:48:28.421349 4823 scope.go:117] "RemoveContainer" containerID="4390191b774bfd675bc80dd6323a80f156ddd947e8ae2e7bead435e1381c490b"
Jan 21 17:48:28 crc kubenswrapper[4823]: I0121 17:48:28.463166 4823 scope.go:117] "RemoveContainer" containerID="0e25ba130262bf3981890d0157534915dbdd7f5f6d73bf4de6aa3f47178c655f"
Jan 21 17:48:28 crc kubenswrapper[4823]: I0121 17:48:28.516404 4823 scope.go:117] "RemoveContainer" containerID="ffeb49fadcdb837d3003df585c8066b01b288503d22f905488030007e7abce13"
Jan 21 17:48:28 crc kubenswrapper[4823]: I0121 17:48:28.540866 4823 scope.go:117] "RemoveContainer" containerID="7450e92d8570cca61abdb5f2fdf552a81b8e93450b302712d25c883f417f62db"
Jan 21 17:48:28 crc kubenswrapper[4823]: I0121 17:48:28.567909 4823 scope.go:117] "RemoveContainer" containerID="b2512209aae2be45fddeca499b2a6cb4591a4e2ca2f1dfcf97d1dd7f68779ad7"
Jan 21 17:49:00 crc kubenswrapper[4823]: I0121 17:49:00.043465 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qs8v9"]
Jan 21 17:49:00 crc kubenswrapper[4823]: I0121 17:49:00.052878 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qs8v9"]
Jan 21 17:49:01 crc kubenswrapper[4823]: I0121 17:49:01.355392 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223be500-ec7c-4380-9001-8f80d0f799f2" path="/var/lib/kubelet/pods/223be500-ec7c-4380-9001-8f80d0f799f2/volumes"
Jan 21 17:49:15 crc kubenswrapper[4823]: I0121 17:49:15.070563 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:49:15 crc kubenswrapper[4823]: I0121 17:49:15.071119 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:49:22 crc kubenswrapper[4823]: I0121 17:49:22.858832 4823 generic.go:334] "Generic (PLEG): container finished" podID="9dc1e78d-7f63-4a5e-bade-1f72df039863" containerID="2bc30f92e6ecee061a5e00a370c21d057ef44d225a7a0562eebc5efd4fad3cdc" exitCode=0
Jan 21 17:49:22 crc kubenswrapper[4823]: I0121 17:49:22.858960 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf" event={"ID":"9dc1e78d-7f63-4a5e-bade-1f72df039863","Type":"ContainerDied","Data":"2bc30f92e6ecee061a5e00a370c21d057ef44d225a7a0562eebc5efd4fad3cdc"}
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.368322 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.501254 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-ssh-key-openstack-edpm-ipam\") pod \"9dc1e78d-7f63-4a5e-bade-1f72df039863\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") "
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.501797 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phf5h\" (UniqueName: \"kubernetes.io/projected/9dc1e78d-7f63-4a5e-bade-1f72df039863-kube-api-access-phf5h\") pod \"9dc1e78d-7f63-4a5e-bade-1f72df039863\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") "
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.501975 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-inventory\") pod \"9dc1e78d-7f63-4a5e-bade-1f72df039863\" (UID: \"9dc1e78d-7f63-4a5e-bade-1f72df039863\") "
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.510673 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc1e78d-7f63-4a5e-bade-1f72df039863-kube-api-access-phf5h" (OuterVolumeSpecName: "kube-api-access-phf5h") pod "9dc1e78d-7f63-4a5e-bade-1f72df039863" (UID: "9dc1e78d-7f63-4a5e-bade-1f72df039863"). InnerVolumeSpecName "kube-api-access-phf5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.550062 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-inventory" (OuterVolumeSpecName: "inventory") pod "9dc1e78d-7f63-4a5e-bade-1f72df039863" (UID: "9dc1e78d-7f63-4a5e-bade-1f72df039863"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.585545 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9dc1e78d-7f63-4a5e-bade-1f72df039863" (UID: "9dc1e78d-7f63-4a5e-bade-1f72df039863"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.606074 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phf5h\" (UniqueName: \"kubernetes.io/projected/9dc1e78d-7f63-4a5e-bade-1f72df039863-kube-api-access-phf5h\") on node \"crc\" DevicePath \"\""
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.606112 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.606123 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dc1e78d-7f63-4a5e-bade-1f72df039863-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.878071 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf" event={"ID":"9dc1e78d-7f63-4a5e-bade-1f72df039863","Type":"ContainerDied","Data":"1fad1de4caa556eb75203cedd8ceff2385be4d688943e6eb63ccfbadff72ff80"}
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.878127 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fad1de4caa556eb75203cedd8ceff2385be4d688943e6eb63ccfbadff72ff80"
Jan 21 17:49:24 crc kubenswrapper[4823]: I0121 17:49:24.878142 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.004434 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps"]
Jan 21 17:49:25 crc kubenswrapper[4823]: E0121 17:49:25.004824 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc1e78d-7f63-4a5e-bade-1f72df039863" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.004846 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc1e78d-7f63-4a5e-bade-1f72df039863" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.005050 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc1e78d-7f63-4a5e-bade-1f72df039863" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.006484 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.009643 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.009990 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.010161 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.010266 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.011979 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps"]
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.116213 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4mps\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.116774 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4mps\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps"
Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.116936 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnf44\" (UniqueName: \"kubernetes.io/projected/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-kube-api-access-vnf44\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4mps\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") "
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.219297 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4mps\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.219483 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4mps\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.219632 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnf44\" (UniqueName: \"kubernetes.io/projected/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-kube-api-access-vnf44\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4mps\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.223785 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4mps\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.225077 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4mps\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.238142 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnf44\" (UniqueName: \"kubernetes.io/projected/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-kube-api-access-vnf44\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4mps\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.350770 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:25 crc kubenswrapper[4823]: I0121 17:49:25.933487 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps"] Jan 21 17:49:26 crc kubenswrapper[4823]: I0121 17:49:26.903919 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" event={"ID":"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a","Type":"ContainerStarted","Data":"97dd7b1415dfe772ea1a5101f13291ae9b26acf9232bab820d63c1a5f7f65aa4"} Jan 21 17:49:26 crc kubenswrapper[4823]: I0121 17:49:26.904476 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" event={"ID":"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a","Type":"ContainerStarted","Data":"189baa5e6364e50feade383664f19299e814faefc422dfe8199f2f2e2b183086"} Jan 21 17:49:26 crc kubenswrapper[4823]: I0121 17:49:26.932557 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" podStartSLOduration=2.43249932 podStartE2EDuration="2.932539943s" podCreationTimestamp="2026-01-21 17:49:24 +0000 UTC" firstStartedPulling="2026-01-21 17:49:25.936314892 +0000 UTC m=+1966.862445752" lastFinishedPulling="2026-01-21 17:49:26.436355515 +0000 UTC m=+1967.362486375" observedRunningTime="2026-01-21 17:49:26.924634118 +0000 UTC m=+1967.850764978" watchObservedRunningTime="2026-01-21 17:49:26.932539943 +0000 UTC m=+1967.858670803" Jan 21 17:49:28 crc kubenswrapper[4823]: I0121 17:49:28.732678 4823 scope.go:117] "RemoveContainer" containerID="802c97b99c54a1d566f9b886c15ff1806c668f1ee2cb9c2a22b1305cbf79980c" Jan 21 17:49:31 crc kubenswrapper[4823]: I0121 17:49:31.957717 4823 generic.go:334] "Generic (PLEG): container finished" podID="354d4cde-c7c6-4b49-9ec0-11c551fc6a7a" containerID="97dd7b1415dfe772ea1a5101f13291ae9b26acf9232bab820d63c1a5f7f65aa4" exitCode=0 Jan 21 17:49:31 crc kubenswrapper[4823]: I0121 17:49:31.957796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" event={"ID":"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a","Type":"ContainerDied","Data":"97dd7b1415dfe772ea1a5101f13291ae9b26acf9232bab820d63c1a5f7f65aa4"} Jan 21 17:49:32 crc kubenswrapper[4823]: I0121 17:49:32.047322 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c4mnt"] Jan 21 17:49:32 crc kubenswrapper[4823]: I0121 17:49:32.057540 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qdxxs"] Jan 21 17:49:32 crc kubenswrapper[4823]: I0121 17:49:32.067774 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qdxxs"] Jan 21 17:49:32 crc kubenswrapper[4823]: I0121 17:49:32.076497 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c4mnt"] Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.354750 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d463e08-e2af-4555-825a-b3913bf13d03" path="/var/lib/kubelet/pods/2d463e08-e2af-4555-825a-b3913bf13d03/volumes" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.356054 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b75c81-1f2c-4b8f-a95a-98ba021bb41f" 
path="/var/lib/kubelet/pods/e9b75c81-1f2c-4b8f-a95a-98ba021bb41f/volumes" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.426627 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.595024 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-inventory\") pod \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.595077 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-ssh-key-openstack-edpm-ipam\") pod \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.595169 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnf44\" (UniqueName: \"kubernetes.io/projected/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-kube-api-access-vnf44\") pod \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\" (UID: \"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a\") " Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.603148 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-kube-api-access-vnf44" (OuterVolumeSpecName: "kube-api-access-vnf44") pod "354d4cde-c7c6-4b49-9ec0-11c551fc6a7a" (UID: "354d4cde-c7c6-4b49-9ec0-11c551fc6a7a"). InnerVolumeSpecName "kube-api-access-vnf44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.622678 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-inventory" (OuterVolumeSpecName: "inventory") pod "354d4cde-c7c6-4b49-9ec0-11c551fc6a7a" (UID: "354d4cde-c7c6-4b49-9ec0-11c551fc6a7a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.623117 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "354d4cde-c7c6-4b49-9ec0-11c551fc6a7a" (UID: "354d4cde-c7c6-4b49-9ec0-11c551fc6a7a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.697762 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.697808 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.697824 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnf44\" (UniqueName: \"kubernetes.io/projected/354d4cde-c7c6-4b49-9ec0-11c551fc6a7a-kube-api-access-vnf44\") on node \"crc\" DevicePath \"\"" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.982582 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" event={"ID":"354d4cde-c7c6-4b49-9ec0-11c551fc6a7a","Type":"ContainerDied","Data":"189baa5e6364e50feade383664f19299e814faefc422dfe8199f2f2e2b183086"} Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.982652 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189baa5e6364e50feade383664f19299e814faefc422dfe8199f2f2e2b183086" Jan 21 17:49:33 crc kubenswrapper[4823]: I0121 17:49:33.982681 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4mps" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.217224 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9"] Jan 21 17:49:34 crc kubenswrapper[4823]: E0121 17:49:34.217793 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354d4cde-c7c6-4b49-9ec0-11c551fc6a7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.217817 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="354d4cde-c7c6-4b49-9ec0-11c551fc6a7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.218150 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="354d4cde-c7c6-4b49-9ec0-11c551fc6a7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.219100 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.224040 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.224455 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.233402 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9"] Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.236004 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.238636 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.311990 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9c6q9\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.312581 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9c6q9\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.312637 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4q8j\" (UniqueName: \"kubernetes.io/projected/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-kube-api-access-s4q8j\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9c6q9\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.414085 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4q8j\" (UniqueName: \"kubernetes.io/projected/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-kube-api-access-s4q8j\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9c6q9\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.414184 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9c6q9\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.414291 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-9c6q9\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.419347 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9c6q9\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.427457 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9c6q9\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.431894 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4q8j\" (UniqueName: \"kubernetes.io/projected/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-kube-api-access-s4q8j\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9c6q9\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:34 crc kubenswrapper[4823]: I0121 17:49:34.549770 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:49:35 crc kubenswrapper[4823]: I0121 17:49:35.124437 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9"] Jan 21 17:49:35 crc kubenswrapper[4823]: W0121 17:49:35.152583 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0725d92_cd2b_4258_b7dc_3c76b8f75eb0.slice/crio-c22545561e0679318311695118fa8707c1a66da1aeb17351b2871e9489f6c072 WatchSource:0}: Error finding container c22545561e0679318311695118fa8707c1a66da1aeb17351b2871e9489f6c072: Status 404 returned error can't find the container with id c22545561e0679318311695118fa8707c1a66da1aeb17351b2871e9489f6c072 Jan 21 17:49:36 crc kubenswrapper[4823]: I0121 17:49:36.005461 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" event={"ID":"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0","Type":"ContainerStarted","Data":"c22545561e0679318311695118fa8707c1a66da1aeb17351b2871e9489f6c072"} Jan 21 17:49:37 crc kubenswrapper[4823]: I0121 17:49:37.018054 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" event={"ID":"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0","Type":"ContainerStarted","Data":"1c54a529c3963d59d42bdce4def643de0b75964f6de3fd95647e9edfdb2edc49"} Jan 21 17:49:37 crc kubenswrapper[4823]: I0121 17:49:37.039797 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" podStartSLOduration=2.237500355 podStartE2EDuration="3.039779653s" podCreationTimestamp="2026-01-21 17:49:34 +0000 UTC" firstStartedPulling="2026-01-21 17:49:35.155054387 +0000 UTC m=+1976.081185247" lastFinishedPulling="2026-01-21 
17:49:35.957333685 +0000 UTC m=+1976.883464545" observedRunningTime="2026-01-21 17:49:37.031686883 +0000 UTC m=+1977.957817743" watchObservedRunningTime="2026-01-21 17:49:37.039779653 +0000 UTC m=+1977.965910513" Jan 21 17:49:45 crc kubenswrapper[4823]: I0121 17:49:45.070416 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:49:45 crc kubenswrapper[4823]: I0121 17:49:45.071241 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:50:15 crc kubenswrapper[4823]: I0121 17:50:15.070751 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:50:15 crc kubenswrapper[4823]: I0121 17:50:15.071693 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:50:15 crc kubenswrapper[4823]: I0121 17:50:15.071771 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:50:15 crc kubenswrapper[4823]: I0121 17:50:15.072930 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6730de56359999d27e136e7b8e170ebada5c65c8e7dc82dfa41107b0b851dfa2"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:50:15 crc kubenswrapper[4823]: I0121 17:50:15.073032 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://6730de56359999d27e136e7b8e170ebada5c65c8e7dc82dfa41107b0b851dfa2" gracePeriod=600 Jan 21 17:50:15 crc kubenswrapper[4823]: I0121 17:50:15.452070 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="6730de56359999d27e136e7b8e170ebada5c65c8e7dc82dfa41107b0b851dfa2" exitCode=0 Jan 21 17:50:15 crc kubenswrapper[4823]: I0121 17:50:15.452519 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"6730de56359999d27e136e7b8e170ebada5c65c8e7dc82dfa41107b0b851dfa2"} Jan 21 17:50:15 crc kubenswrapper[4823]: I0121 17:50:15.452714 4823 scope.go:117] "RemoveContainer" containerID="3dfa27cab46d660d19374569c37440b850bf4b2a95f20bec4e6cde80500e35eb" Jan 21 17:50:16 crc kubenswrapper[4823]: 
I0121 17:50:16.462823 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce"} Jan 21 17:50:17 crc kubenswrapper[4823]: I0121 17:50:17.477223 4823 generic.go:334] "Generic (PLEG): container finished" podID="a0725d92-cd2b-4258-b7dc-3c76b8f75eb0" containerID="1c54a529c3963d59d42bdce4def643de0b75964f6de3fd95647e9edfdb2edc49" exitCode=0 Jan 21 17:50:17 crc kubenswrapper[4823]: I0121 17:50:17.477322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" event={"ID":"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0","Type":"ContainerDied","Data":"1c54a529c3963d59d42bdce4def643de0b75964f6de3fd95647e9edfdb2edc49"} Jan 21 17:50:18 crc kubenswrapper[4823]: I0121 17:50:18.056216 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-khj8v"] Jan 21 17:50:18 crc kubenswrapper[4823]: I0121 17:50:18.067166 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-khj8v"] Jan 21 17:50:18 crc kubenswrapper[4823]: I0121 17:50:18.942536 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.001720 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-ssh-key-openstack-edpm-ipam\") pod \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.001839 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4q8j\" (UniqueName: \"kubernetes.io/projected/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-kube-api-access-s4q8j\") pod \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.002090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-inventory\") pod \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\" (UID: \"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0\") " Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.007954 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-kube-api-access-s4q8j" (OuterVolumeSpecName: "kube-api-access-s4q8j") pod "a0725d92-cd2b-4258-b7dc-3c76b8f75eb0" (UID: "a0725d92-cd2b-4258-b7dc-3c76b8f75eb0"). InnerVolumeSpecName "kube-api-access-s4q8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.029528 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a0725d92-cd2b-4258-b7dc-3c76b8f75eb0" (UID: "a0725d92-cd2b-4258-b7dc-3c76b8f75eb0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.043338 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-inventory" (OuterVolumeSpecName: "inventory") pod "a0725d92-cd2b-4258-b7dc-3c76b8f75eb0" (UID: "a0725d92-cd2b-4258-b7dc-3c76b8f75eb0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.104663 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4q8j\" (UniqueName: \"kubernetes.io/projected/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-kube-api-access-s4q8j\") on node \"crc\" DevicePath \"\"" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.104728 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.104744 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0725d92-cd2b-4258-b7dc-3c76b8f75eb0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.363799 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3843c17-8569-45a9-af71-95e31515609b" path="/var/lib/kubelet/pods/b3843c17-8569-45a9-af71-95e31515609b/volumes" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.500715 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" event={"ID":"a0725d92-cd2b-4258-b7dc-3c76b8f75eb0","Type":"ContainerDied","Data":"c22545561e0679318311695118fa8707c1a66da1aeb17351b2871e9489f6c072"} Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.501027 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c22545561e0679318311695118fa8707c1a66da1aeb17351b2871e9489f6c072" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.501106 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9c6q9" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.708394 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk"] Jan 21 17:50:19 crc kubenswrapper[4823]: E0121 17:50:19.708935 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0725d92-cd2b-4258-b7dc-3c76b8f75eb0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.708959 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0725d92-cd2b-4258-b7dc-3c76b8f75eb0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.709217 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0725d92-cd2b-4258-b7dc-3c76b8f75eb0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.710143 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.713071 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.713991 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.714415 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.714925 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.732044 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk"] Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.819929 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.820023 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7hm\" (UniqueName: \"kubernetes.io/projected/7db429cb-d9dd-4122-b81b-239b40952922-kube-api-access-mq7hm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.820138 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.922147 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.922291 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.922359 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq7hm\" (UniqueName: 
\"kubernetes.io/projected/7db429cb-d9dd-4122-b81b-239b40952922-kube-api-access-mq7hm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.928053 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.930503 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:19 crc kubenswrapper[4823]: I0121 17:50:19.941500 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq7hm\" (UniqueName: \"kubernetes.io/projected/7db429cb-d9dd-4122-b81b-239b40952922-kube-api-access-mq7hm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:20 crc kubenswrapper[4823]: I0121 17:50:20.032293 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:50:20 crc kubenswrapper[4823]: I0121 17:50:20.559681 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk"] Jan 21 17:50:20 crc kubenswrapper[4823]: W0121 17:50:20.565430 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7db429cb_d9dd_4122_b81b_239b40952922.slice/crio-a8efbb36c4beeabd6d1999dd997672950ce15db4c0c68279b0801190f07e6029 WatchSource:0}: Error finding container a8efbb36c4beeabd6d1999dd997672950ce15db4c0c68279b0801190f07e6029: Status 404 returned error can't find the container with id a8efbb36c4beeabd6d1999dd997672950ce15db4c0c68279b0801190f07e6029 Jan 21 17:50:21 crc kubenswrapper[4823]: I0121 17:50:21.522890 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" event={"ID":"7db429cb-d9dd-4122-b81b-239b40952922","Type":"ContainerStarted","Data":"fcb5115fb8860012d6928529bb1bac716f0e2a727399ad25ae3dbb39408b601c"} Jan 21 17:50:21 crc kubenswrapper[4823]: I0121 17:50:21.523170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" event={"ID":"7db429cb-d9dd-4122-b81b-239b40952922","Type":"ContainerStarted","Data":"a8efbb36c4beeabd6d1999dd997672950ce15db4c0c68279b0801190f07e6029"} Jan 21 17:50:21 crc kubenswrapper[4823]: I0121 17:50:21.543411 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" podStartSLOduration=2.036070158 podStartE2EDuration="2.54338554s" podCreationTimestamp="2026-01-21 17:50:19 +0000 UTC" 
firstStartedPulling="2026-01-21 17:50:20.568810173 +0000 UTC m=+2021.494941033" lastFinishedPulling="2026-01-21 17:50:21.076125535 +0000 UTC m=+2022.002256415" observedRunningTime="2026-01-21 17:50:21.537485975 +0000 UTC m=+2022.463616845" watchObservedRunningTime="2026-01-21 17:50:21.54338554 +0000 UTC m=+2022.469516400" Jan 21 17:50:28 crc kubenswrapper[4823]: I0121 17:50:28.853416 4823 scope.go:117] "RemoveContainer" containerID="97440ed9364111db74d0dc90f42bf355673a068f500302ea526471d224b6aacc" Jan 21 17:50:28 crc kubenswrapper[4823]: I0121 17:50:28.883949 4823 scope.go:117] "RemoveContainer" containerID="3e9311278f3e286f8c34db74363d66e17e79c70aeffe845de02c3060993331f8" Jan 21 17:50:28 crc kubenswrapper[4823]: I0121 17:50:28.947639 4823 scope.go:117] "RemoveContainer" containerID="47ffdff1507a2f04120a0a573cd9bb4aa164a79a34c73a08ab0db3de8dc8cbd0" Jan 21 17:52:15 crc kubenswrapper[4823]: I0121 17:52:15.070225 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:52:15 crc kubenswrapper[4823]: I0121 17:52:15.070717 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:52:45 crc kubenswrapper[4823]: I0121 17:52:45.070430 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:52:45 crc kubenswrapper[4823]: I0121 17:52:45.071023 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:52:46 crc kubenswrapper[4823]: E0121 17:52:46.460110 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7db429cb_d9dd_4122_b81b_239b40952922.slice/crio-fcb5115fb8860012d6928529bb1bac716f0e2a727399ad25ae3dbb39408b601c.scope\": RecentStats: unable to find data in memory cache]" Jan 21 17:52:46 crc kubenswrapper[4823]: I0121 17:52:46.938192 4823 generic.go:334] "Generic (PLEG): container finished" podID="7db429cb-d9dd-4122-b81b-239b40952922" containerID="fcb5115fb8860012d6928529bb1bac716f0e2a727399ad25ae3dbb39408b601c" exitCode=0 Jan 21 17:52:46 crc kubenswrapper[4823]: I0121 17:52:46.938249 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" event={"ID":"7db429cb-d9dd-4122-b81b-239b40952922","Type":"ContainerDied","Data":"fcb5115fb8860012d6928529bb1bac716f0e2a727399ad25ae3dbb39408b601c"} Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.350619 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.494594 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-inventory\") pod \"7db429cb-d9dd-4122-b81b-239b40952922\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.494972 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq7hm\" (UniqueName: \"kubernetes.io/projected/7db429cb-d9dd-4122-b81b-239b40952922-kube-api-access-mq7hm\") pod \"7db429cb-d9dd-4122-b81b-239b40952922\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.495050 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-ssh-key-openstack-edpm-ipam\") pod \"7db429cb-d9dd-4122-b81b-239b40952922\" (UID: \"7db429cb-d9dd-4122-b81b-239b40952922\") " Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.504142 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db429cb-d9dd-4122-b81b-239b40952922-kube-api-access-mq7hm" (OuterVolumeSpecName: "kube-api-access-mq7hm") pod "7db429cb-d9dd-4122-b81b-239b40952922" (UID: "7db429cb-d9dd-4122-b81b-239b40952922"). InnerVolumeSpecName "kube-api-access-mq7hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.525222 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-inventory" (OuterVolumeSpecName: "inventory") pod "7db429cb-d9dd-4122-b81b-239b40952922" (UID: "7db429cb-d9dd-4122-b81b-239b40952922"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.527413 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7db429cb-d9dd-4122-b81b-239b40952922" (UID: "7db429cb-d9dd-4122-b81b-239b40952922"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.597316 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.597456 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq7hm\" (UniqueName: \"kubernetes.io/projected/7db429cb-d9dd-4122-b81b-239b40952922-kube-api-access-mq7hm\") on node \"crc\" DevicePath \"\"" Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.597517 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7db429cb-d9dd-4122-b81b-239b40952922-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.961419 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" event={"ID":"7db429cb-d9dd-4122-b81b-239b40952922","Type":"ContainerDied","Data":"a8efbb36c4beeabd6d1999dd997672950ce15db4c0c68279b0801190f07e6029"} Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.961457 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk" Jan 21 17:52:48 crc kubenswrapper[4823]: I0121 17:52:48.961466 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8efbb36c4beeabd6d1999dd997672950ce15db4c0c68279b0801190f07e6029" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.040984 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-k225g"] Jan 21 17:52:49 crc kubenswrapper[4823]: E0121 17:52:49.041918 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db429cb-d9dd-4122-b81b-239b40952922" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.041939 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db429cb-d9dd-4122-b81b-239b40952922" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.042177 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db429cb-d9dd-4122-b81b-239b40952922" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.043008 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.047154 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.047604 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.048311 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.048624 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.056099 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-k225g"] Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.208737 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-k225g\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.208784 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-k225g\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.208825 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdqj\" (UniqueName: \"kubernetes.io/projected/d36dcf94-ed56-418c-8e90-7a4da66a51d9-kube-api-access-2rdqj\") pod \"ssh-known-hosts-edpm-deployment-k225g\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.311335 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-k225g\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.311381 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-k225g\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.311418 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdqj\" (UniqueName: \"kubernetes.io/projected/d36dcf94-ed56-418c-8e90-7a4da66a51d9-kube-api-access-2rdqj\") pod \"ssh-known-hosts-edpm-deployment-k225g\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc 
kubenswrapper[4823]: I0121 17:52:49.316411 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-k225g\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.316439 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-k225g\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.333381 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdqj\" (UniqueName: \"kubernetes.io/projected/d36dcf94-ed56-418c-8e90-7a4da66a51d9-kube-api-access-2rdqj\") pod \"ssh-known-hosts-edpm-deployment-k225g\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.376101 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.934135 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-k225g"] Jan 21 17:52:49 crc kubenswrapper[4823]: I0121 17:52:49.971224 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-k225g" event={"ID":"d36dcf94-ed56-418c-8e90-7a4da66a51d9","Type":"ContainerStarted","Data":"fdd5a9f508d67fec2e7e64f1b135936acc092d6598d21a7f7660af303060f344"} Jan 21 17:52:50 crc kubenswrapper[4823]: I0121 17:52:50.980414 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-k225g" event={"ID":"d36dcf94-ed56-418c-8e90-7a4da66a51d9","Type":"ContainerStarted","Data":"8c155cccdcc113f8ca076ad23538cd2c5df959dd972201e39441546ea961c06b"} Jan 21 17:52:51 crc kubenswrapper[4823]: I0121 17:52:51.000629 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-k225g" podStartSLOduration=1.47917653 podStartE2EDuration="2.000610263s" podCreationTimestamp="2026-01-21 17:52:49 +0000 UTC" firstStartedPulling="2026-01-21 17:52:49.933513919 +0000 UTC m=+2170.859644779" lastFinishedPulling="2026-01-21 17:52:50.454947612 +0000 UTC m=+2171.381078512" observedRunningTime="2026-01-21 17:52:50.997307222 +0000 UTC m=+2171.923438082" watchObservedRunningTime="2026-01-21 17:52:51.000610263 +0000 UTC m=+2171.926741123" Jan 21 17:52:57 crc kubenswrapper[4823]: I0121 17:52:57.907975 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2h42f"] Jan 21 17:52:57 crc kubenswrapper[4823]: I0121 17:52:57.911479 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:57 crc kubenswrapper[4823]: I0121 17:52:57.990882 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-utilities\") pod \"community-operators-2h42f\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:57 crc kubenswrapper[4823]: I0121 17:52:57.990978 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-catalog-content\") pod \"community-operators-2h42f\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:57 crc kubenswrapper[4823]: I0121 17:52:57.991003 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk9xb\" (UniqueName: \"kubernetes.io/projected/b240d848-5a12-4e71-870d-29637934a76c-kube-api-access-nk9xb\") pod \"community-operators-2h42f\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:58 crc kubenswrapper[4823]: I0121 17:52:58.093543 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-catalog-content\") pod \"community-operators-2h42f\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:58 crc kubenswrapper[4823]: I0121 17:52:58.093628 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk9xb\" (UniqueName: \"kubernetes.io/projected/b240d848-5a12-4e71-870d-29637934a76c-kube-api-access-nk9xb\") pod \"community-operators-2h42f\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:58 crc kubenswrapper[4823]: I0121 17:52:58.094061 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-utilities\") pod \"community-operators-2h42f\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:58 crc kubenswrapper[4823]: I0121 17:52:58.094449 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-catalog-content\") pod \"community-operators-2h42f\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:58 crc kubenswrapper[4823]: I0121 17:52:58.094496 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-utilities\") pod \"community-operators-2h42f\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:58 crc kubenswrapper[4823]: I0121 17:52:58.124687 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk9xb\" (UniqueName: \"kubernetes.io/projected/b240d848-5a12-4e71-870d-29637934a76c-kube-api-access-nk9xb\") pod 
\"community-operators-2h42f\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:58 crc kubenswrapper[4823]: I0121 17:52:58.189613 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2h42f"] Jan 21 17:52:58 crc kubenswrapper[4823]: I0121 17:52:58.241408 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:52:58 crc kubenswrapper[4823]: I0121 17:52:58.721321 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2h42f"] Jan 21 17:52:59 crc kubenswrapper[4823]: I0121 17:52:59.059208 4823 generic.go:334] "Generic (PLEG): container finished" podID="b240d848-5a12-4e71-870d-29637934a76c" containerID="a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2" exitCode=0 Jan 21 17:52:59 crc kubenswrapper[4823]: I0121 17:52:59.059272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h42f" event={"ID":"b240d848-5a12-4e71-870d-29637934a76c","Type":"ContainerDied","Data":"a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2"} Jan 21 17:52:59 crc kubenswrapper[4823]: I0121 17:52:59.059298 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h42f" event={"ID":"b240d848-5a12-4e71-870d-29637934a76c","Type":"ContainerStarted","Data":"d216a0e6cf1b0dd051fcd57d2535248b9826cb4774f97d3a37a3b7bdcc8f9ce1"} Jan 21 17:52:59 crc kubenswrapper[4823]: I0121 17:52:59.062965 4823 generic.go:334] "Generic (PLEG): container finished" podID="d36dcf94-ed56-418c-8e90-7a4da66a51d9" containerID="8c155cccdcc113f8ca076ad23538cd2c5df959dd972201e39441546ea961c06b" exitCode=0 Jan 21 17:52:59 crc kubenswrapper[4823]: I0121 17:52:59.063000 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-k225g" event={"ID":"d36dcf94-ed56-418c-8e90-7a4da66a51d9","Type":"ContainerDied","Data":"8c155cccdcc113f8ca076ad23538cd2c5df959dd972201e39441546ea961c06b"} Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.507588 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.570990 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-inventory-0\") pod \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.571131 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-ssh-key-openstack-edpm-ipam\") pod \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.571165 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rdqj\" (UniqueName: \"kubernetes.io/projected/d36dcf94-ed56-418c-8e90-7a4da66a51d9-kube-api-access-2rdqj\") pod \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\" (UID: \"d36dcf94-ed56-418c-8e90-7a4da66a51d9\") " Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.577365 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36dcf94-ed56-418c-8e90-7a4da66a51d9-kube-api-access-2rdqj" (OuterVolumeSpecName: "kube-api-access-2rdqj") pod "d36dcf94-ed56-418c-8e90-7a4da66a51d9" (UID: "d36dcf94-ed56-418c-8e90-7a4da66a51d9"). InnerVolumeSpecName "kube-api-access-2rdqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.600331 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d36dcf94-ed56-418c-8e90-7a4da66a51d9" (UID: "d36dcf94-ed56-418c-8e90-7a4da66a51d9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.603960 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d36dcf94-ed56-418c-8e90-7a4da66a51d9" (UID: "d36dcf94-ed56-418c-8e90-7a4da66a51d9"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.675295 4823 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.675349 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d36dcf94-ed56-418c-8e90-7a4da66a51d9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:00 crc kubenswrapper[4823]: I0121 17:53:00.675370 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rdqj\" (UniqueName: \"kubernetes.io/projected/d36dcf94-ed56-418c-8e90-7a4da66a51d9-kube-api-access-2rdqj\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.088722 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h42f" event={"ID":"b240d848-5a12-4e71-870d-29637934a76c","Type":"ContainerStarted","Data":"5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b"} Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.093413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-k225g" event={"ID":"d36dcf94-ed56-418c-8e90-7a4da66a51d9","Type":"ContainerDied","Data":"fdd5a9f508d67fec2e7e64f1b135936acc092d6598d21a7f7660af303060f344"} Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.093470 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd5a9f508d67fec2e7e64f1b135936acc092d6598d21a7f7660af303060f344" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.093554 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-k225g" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.210518 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k"] Jan 21 17:53:01 crc kubenswrapper[4823]: E0121 17:53:01.210914 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36dcf94-ed56-418c-8e90-7a4da66a51d9" containerName="ssh-known-hosts-edpm-deployment" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.210933 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36dcf94-ed56-418c-8e90-7a4da66a51d9" containerName="ssh-known-hosts-edpm-deployment" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.211103 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36dcf94-ed56-418c-8e90-7a4da66a51d9" containerName="ssh-known-hosts-edpm-deployment" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.211763 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.215514 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.215579 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.216296 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.216430 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.220809 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k"] Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.289338 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xw4s\" (UniqueName: \"kubernetes.io/projected/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-kube-api-access-8xw4s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rwc2k\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.289554 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rwc2k\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.289629 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rwc2k\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.391328 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xw4s\" (UniqueName: \"kubernetes.io/projected/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-kube-api-access-8xw4s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rwc2k\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.391437 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rwc2k\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.391491 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-rwc2k\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.395767 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rwc2k\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.396974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rwc2k\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.408249 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xw4s\" (UniqueName: \"kubernetes.io/projected/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-kube-api-access-8xw4s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rwc2k\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:01 crc kubenswrapper[4823]: I0121 17:53:01.532919 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:02 crc kubenswrapper[4823]: I0121 17:53:02.069233 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k"] Jan 21 17:53:02 crc kubenswrapper[4823]: W0121 17:53:02.077841 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fc18e3f_f804_4b4a_abda_ccf74452f2c6.slice/crio-c97e0c003cd637887e5793174daf30d19a0a92fdfbc3c3bbba9406a14aa76b27 WatchSource:0}: Error finding container c97e0c003cd637887e5793174daf30d19a0a92fdfbc3c3bbba9406a14aa76b27: Status 404 returned error can't find the container with id c97e0c003cd637887e5793174daf30d19a0a92fdfbc3c3bbba9406a14aa76b27 Jan 21 17:53:02 crc kubenswrapper[4823]: I0121 17:53:02.112144 4823 generic.go:334] "Generic (PLEG): container finished" podID="b240d848-5a12-4e71-870d-29637934a76c" containerID="5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b" exitCode=0 Jan 21 17:53:02 crc kubenswrapper[4823]: I0121 17:53:02.112756 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h42f" event={"ID":"b240d848-5a12-4e71-870d-29637934a76c","Type":"ContainerDied","Data":"5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b"} Jan 21 17:53:02 crc kubenswrapper[4823]: I0121 17:53:02.114358 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" event={"ID":"2fc18e3f-f804-4b4a-abda-ccf74452f2c6","Type":"ContainerStarted","Data":"c97e0c003cd637887e5793174daf30d19a0a92fdfbc3c3bbba9406a14aa76b27"} Jan 21 17:53:03 crc kubenswrapper[4823]: I0121 17:53:03.124169 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h42f" 
event={"ID":"b240d848-5a12-4e71-870d-29637934a76c","Type":"ContainerStarted","Data":"7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5"} Jan 21 17:53:03 crc kubenswrapper[4823]: I0121 17:53:03.125758 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" event={"ID":"2fc18e3f-f804-4b4a-abda-ccf74452f2c6","Type":"ContainerStarted","Data":"01e2e8092493f4d60dc3f9bc291bc7e0718817d44f46b8142b50fc42dfe618e2"} Jan 21 17:53:03 crc kubenswrapper[4823]: I0121 17:53:03.144917 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2h42f" podStartSLOduration=2.555188961 podStartE2EDuration="6.144881377s" podCreationTimestamp="2026-01-21 17:52:57 +0000 UTC" firstStartedPulling="2026-01-21 17:52:59.062194969 +0000 UTC m=+2179.988325829" lastFinishedPulling="2026-01-21 17:53:02.651887385 +0000 UTC m=+2183.578018245" observedRunningTime="2026-01-21 17:53:03.139587447 +0000 UTC m=+2184.065718327" watchObservedRunningTime="2026-01-21 17:53:03.144881377 +0000 UTC m=+2184.071012237" Jan 21 17:53:03 crc kubenswrapper[4823]: I0121 17:53:03.167634 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" podStartSLOduration=1.500308436 podStartE2EDuration="2.167606528s" podCreationTimestamp="2026-01-21 17:53:01 +0000 UTC" firstStartedPulling="2026-01-21 17:53:02.079997987 +0000 UTC m=+2183.006128847" lastFinishedPulling="2026-01-21 17:53:02.747296079 +0000 UTC m=+2183.673426939" observedRunningTime="2026-01-21 17:53:03.158928424 +0000 UTC m=+2184.085059294" watchObservedRunningTime="2026-01-21 17:53:03.167606528 +0000 UTC m=+2184.093737388" Jan 21 17:53:08 crc kubenswrapper[4823]: I0121 17:53:08.242158 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:53:08 crc kubenswrapper[4823]: I0121 17:53:08.242836 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:53:08 crc kubenswrapper[4823]: I0121 17:53:08.298472 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:53:09 crc kubenswrapper[4823]: I0121 17:53:09.232624 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:53:09 crc kubenswrapper[4823]: I0121 17:53:09.282695 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2h42f"] Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.206124 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2h42f" podUID="b240d848-5a12-4e71-870d-29637934a76c" containerName="registry-server" containerID="cri-o://7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5" gracePeriod=2 Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.690108 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.745377 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-utilities\") pod \"b240d848-5a12-4e71-870d-29637934a76c\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.745456 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk9xb\" (UniqueName: \"kubernetes.io/projected/b240d848-5a12-4e71-870d-29637934a76c-kube-api-access-nk9xb\") pod \"b240d848-5a12-4e71-870d-29637934a76c\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.745509 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-catalog-content\") pod \"b240d848-5a12-4e71-870d-29637934a76c\" (UID: \"b240d848-5a12-4e71-870d-29637934a76c\") " Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.746463 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-utilities" (OuterVolumeSpecName: "utilities") pod "b240d848-5a12-4e71-870d-29637934a76c" (UID: "b240d848-5a12-4e71-870d-29637934a76c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.751689 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b240d848-5a12-4e71-870d-29637934a76c-kube-api-access-nk9xb" (OuterVolumeSpecName: "kube-api-access-nk9xb") pod "b240d848-5a12-4e71-870d-29637934a76c" (UID: "b240d848-5a12-4e71-870d-29637934a76c"). InnerVolumeSpecName "kube-api-access-nk9xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.801137 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b240d848-5a12-4e71-870d-29637934a76c" (UID: "b240d848-5a12-4e71-870d-29637934a76c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.847905 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.847941 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk9xb\" (UniqueName: \"kubernetes.io/projected/b240d848-5a12-4e71-870d-29637934a76c-kube-api-access-nk9xb\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:11 crc kubenswrapper[4823]: I0121 17:53:11.847954 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b240d848-5a12-4e71-870d-29637934a76c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.217033 4823 generic.go:334] "Generic (PLEG): container finished" podID="b240d848-5a12-4e71-870d-29637934a76c" containerID="7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5" exitCode=0 Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.217077 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h42f" event={"ID":"b240d848-5a12-4e71-870d-29637934a76c","Type":"ContainerDied","Data":"7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5"} Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.217090 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2h42f" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.217102 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2h42f" event={"ID":"b240d848-5a12-4e71-870d-29637934a76c","Type":"ContainerDied","Data":"d216a0e6cf1b0dd051fcd57d2535248b9826cb4774f97d3a37a3b7bdcc8f9ce1"} Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.217122 4823 scope.go:117] "RemoveContainer" containerID="7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.242445 4823 scope.go:117] "RemoveContainer" containerID="5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.257151 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2h42f"] Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.264148 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2h42f"] Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.275525 4823 scope.go:117] "RemoveContainer" containerID="a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.310141 4823 scope.go:117] "RemoveContainer" containerID="7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5" Jan 21 17:53:12 crc kubenswrapper[4823]: E0121 17:53:12.310598 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5\": container with ID starting with 7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5 not found: ID does not exist" containerID="7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.310643 
4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5"} err="failed to get container status \"7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5\": rpc error: code = NotFound desc = could not find container \"7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5\": container with ID starting with 7d596f6a427d5df0d04b857d2d5bc056799845855304cab01ff75c22dcc11cf5 not found: ID does not exist" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.310662 4823 scope.go:117] "RemoveContainer" containerID="5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b" Jan 21 17:53:12 crc kubenswrapper[4823]: E0121 17:53:12.310846 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b\": container with ID starting with 5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b not found: ID does not exist" containerID="5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.310873 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b"} err="failed to get container status \"5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b\": rpc error: code = NotFound desc = could not find container \"5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b\": container with ID starting with 5971b27a54c0452defa84474184ae8b6d9a630d1d7193d4c73e8836c6c434f9b not found: ID does not exist" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.310883 4823 scope.go:117] "RemoveContainer" containerID="a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2" Jan 21 17:53:12 crc kubenswrapper[4823]: E0121 17:53:12.311162 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2\": container with ID starting with a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2 not found: ID does not exist" containerID="a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2" Jan 21 17:53:12 crc kubenswrapper[4823]: I0121 17:53:12.311177 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2"} err="failed to get container status \"a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2\": rpc error: code = NotFound desc = could not find container \"a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2\": container with ID starting with a3b20dc030f2b623278677afd94bbad3e22f20a81c1ad99088a839fa73851cd2 not found: ID does not exist" Jan 21 17:53:13 crc kubenswrapper[4823]: I0121 17:53:13.229403 4823 generic.go:334] "Generic (PLEG): container finished" podID="2fc18e3f-f804-4b4a-abda-ccf74452f2c6" containerID="01e2e8092493f4d60dc3f9bc291bc7e0718817d44f46b8142b50fc42dfe618e2" exitCode=0 Jan 21 17:53:13 crc kubenswrapper[4823]: I0121 17:53:13.229489 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" 
event={"ID":"2fc18e3f-f804-4b4a-abda-ccf74452f2c6","Type":"ContainerDied","Data":"01e2e8092493f4d60dc3f9bc291bc7e0718817d44f46b8142b50fc42dfe618e2"} Jan 21 17:53:13 crc kubenswrapper[4823]: I0121 17:53:13.353695 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b240d848-5a12-4e71-870d-29637934a76c" path="/var/lib/kubelet/pods/b240d848-5a12-4e71-870d-29637934a76c/volumes" Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.630090 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.704331 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-inventory\") pod \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.704432 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-ssh-key-openstack-edpm-ipam\") pod \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.704616 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xw4s\" (UniqueName: \"kubernetes.io/projected/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-kube-api-access-8xw4s\") pod \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\" (UID: \"2fc18e3f-f804-4b4a-abda-ccf74452f2c6\") " Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.712164 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-kube-api-access-8xw4s" (OuterVolumeSpecName: "kube-api-access-8xw4s") pod "2fc18e3f-f804-4b4a-abda-ccf74452f2c6" (UID: "2fc18e3f-f804-4b4a-abda-ccf74452f2c6"). InnerVolumeSpecName "kube-api-access-8xw4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.750026 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-inventory" (OuterVolumeSpecName: "inventory") pod "2fc18e3f-f804-4b4a-abda-ccf74452f2c6" (UID: "2fc18e3f-f804-4b4a-abda-ccf74452f2c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.761019 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2fc18e3f-f804-4b4a-abda-ccf74452f2c6" (UID: "2fc18e3f-f804-4b4a-abda-ccf74452f2c6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.816589 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xw4s\" (UniqueName: \"kubernetes.io/projected/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-kube-api-access-8xw4s\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.816643 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:14 crc kubenswrapper[4823]: I0121 17:53:14.816662 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc18e3f-f804-4b4a-abda-ccf74452f2c6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.070083 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.070437 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.070476 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.071268 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.071332 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" gracePeriod=600 Jan 21 17:53:15 crc kubenswrapper[4823]: E0121 17:53:15.195177 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.246245 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.246235 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rwc2k" event={"ID":"2fc18e3f-f804-4b4a-abda-ccf74452f2c6","Type":"ContainerDied","Data":"c97e0c003cd637887e5793174daf30d19a0a92fdfbc3c3bbba9406a14aa76b27"} Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.246399 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c97e0c003cd637887e5793174daf30d19a0a92fdfbc3c3bbba9406a14aa76b27" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.248898 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" exitCode=0 Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.248938 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce"} Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.248995 4823 scope.go:117] "RemoveContainer" containerID="6730de56359999d27e136e7b8e170ebada5c65c8e7dc82dfa41107b0b851dfa2" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.249481 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:53:15 crc kubenswrapper[4823]: E0121 17:53:15.249898 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.369355 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm"] Jan 21 17:53:15 crc kubenswrapper[4823]: E0121 17:53:15.369642 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc18e3f-f804-4b4a-abda-ccf74452f2c6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.369653 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc18e3f-f804-4b4a-abda-ccf74452f2c6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:53:15 crc kubenswrapper[4823]: E0121 17:53:15.369665 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b240d848-5a12-4e71-870d-29637934a76c" containerName="extract-utilities" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.369671 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b240d848-5a12-4e71-870d-29637934a76c" containerName="extract-utilities" Jan 21 17:53:15 crc kubenswrapper[4823]: E0121 17:53:15.369953 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b240d848-5a12-4e71-870d-29637934a76c" containerName="extract-content" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.369988 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b240d848-5a12-4e71-870d-29637934a76c" containerName="extract-content" Jan 21 17:53:15 crc 
kubenswrapper[4823]: E0121 17:53:15.369998 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b240d848-5a12-4e71-870d-29637934a76c" containerName="registry-server" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.370005 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b240d848-5a12-4e71-870d-29637934a76c" containerName="registry-server" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.370188 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc18e3f-f804-4b4a-abda-ccf74452f2c6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.370215 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b240d848-5a12-4e71-870d-29637934a76c" containerName="registry-server" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.371023 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm"] Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.371098 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.373325 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.373793 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.374039 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.374003 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.429823 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.429892 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.430346 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtsgb\" (UniqueName: \"kubernetes.io/projected/d5bfda69-3639-47e3-b736-c3693e826852-kube-api-access-mtsgb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.532562 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtsgb\" (UniqueName: 
\"kubernetes.io/projected/d5bfda69-3639-47e3-b736-c3693e826852-kube-api-access-mtsgb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.532677 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.532721 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.538202 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.539396 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.551502 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtsgb\" (UniqueName: \"kubernetes.io/projected/d5bfda69-3639-47e3-b736-c3693e826852-kube-api-access-mtsgb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:15 crc kubenswrapper[4823]: I0121 17:53:15.691762 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:16 crc kubenswrapper[4823]: I0121 17:53:16.203355 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm"] Jan 21 17:53:16 crc kubenswrapper[4823]: W0121 17:53:16.204000 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5bfda69_3639_47e3_b736_c3693e826852.slice/crio-d4c9ff27166da1d1b98ec7bd93a1b499ec25e132cdd9f029277e9c423011f549 WatchSource:0}: Error finding container d4c9ff27166da1d1b98ec7bd93a1b499ec25e132cdd9f029277e9c423011f549: Status 404 returned error can't find the container with id d4c9ff27166da1d1b98ec7bd93a1b499ec25e132cdd9f029277e9c423011f549 Jan 21 17:53:16 crc kubenswrapper[4823]: I0121 17:53:16.206816 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:53:16 crc kubenswrapper[4823]: I0121 17:53:16.265079 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" event={"ID":"d5bfda69-3639-47e3-b736-c3693e826852","Type":"ContainerStarted","Data":"d4c9ff27166da1d1b98ec7bd93a1b499ec25e132cdd9f029277e9c423011f549"} Jan 21 17:53:17 crc kubenswrapper[4823]: I0121 17:53:17.278489 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" event={"ID":"d5bfda69-3639-47e3-b736-c3693e826852","Type":"ContainerStarted","Data":"90a6d6abf5231d66987caba4cf4585cccc4d426fcb5474b3772ec12f47cd0af3"} Jan 21 17:53:17 crc kubenswrapper[4823]: I0121 17:53:17.295456 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" podStartSLOduration=1.824140198 podStartE2EDuration="2.295437535s" podCreationTimestamp="2026-01-21 17:53:15 +0000 UTC" firstStartedPulling="2026-01-21 17:53:16.206638184 +0000 UTC m=+2197.132769044" lastFinishedPulling="2026-01-21 17:53:16.677935521 +0000 UTC m=+2197.604066381" observedRunningTime="2026-01-21 17:53:17.294207904 +0000 UTC m=+2198.220338774" watchObservedRunningTime="2026-01-21 17:53:17.295437535 +0000 UTC m=+2198.221568395" Jan 21 17:53:27 crc kubenswrapper[4823]: I0121 17:53:27.343300 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:53:27 crc kubenswrapper[4823]: E0121 17:53:27.344131 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:53:27 crc kubenswrapper[4823]: I0121 17:53:27.391054 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5bfda69-3639-47e3-b736-c3693e826852" containerID="90a6d6abf5231d66987caba4cf4585cccc4d426fcb5474b3772ec12f47cd0af3" exitCode=0 Jan 21 17:53:27 crc kubenswrapper[4823]: I0121 17:53:27.391128 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" 
event={"ID":"d5bfda69-3639-47e3-b736-c3693e826852","Type":"ContainerDied","Data":"90a6d6abf5231d66987caba4cf4585cccc4d426fcb5474b3772ec12f47cd0af3"} Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.810236 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.823444 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-inventory\") pod \"d5bfda69-3639-47e3-b736-c3693e826852\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.823525 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtsgb\" (UniqueName: \"kubernetes.io/projected/d5bfda69-3639-47e3-b736-c3693e826852-kube-api-access-mtsgb\") pod \"d5bfda69-3639-47e3-b736-c3693e826852\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.823630 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-ssh-key-openstack-edpm-ipam\") pod \"d5bfda69-3639-47e3-b736-c3693e826852\" (UID: \"d5bfda69-3639-47e3-b736-c3693e826852\") " Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.830233 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5bfda69-3639-47e3-b736-c3693e826852-kube-api-access-mtsgb" (OuterVolumeSpecName: "kube-api-access-mtsgb") pod "d5bfda69-3639-47e3-b736-c3693e826852" (UID: "d5bfda69-3639-47e3-b736-c3693e826852"). InnerVolumeSpecName "kube-api-access-mtsgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.860456 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d5bfda69-3639-47e3-b736-c3693e826852" (UID: "d5bfda69-3639-47e3-b736-c3693e826852"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.879782 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-inventory" (OuterVolumeSpecName: "inventory") pod "d5bfda69-3639-47e3-b736-c3693e826852" (UID: "d5bfda69-3639-47e3-b736-c3693e826852"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.925491 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtsgb\" (UniqueName: \"kubernetes.io/projected/d5bfda69-3639-47e3-b736-c3693e826852-kube-api-access-mtsgb\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.925524 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:28 crc kubenswrapper[4823]: I0121 17:53:28.925534 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5bfda69-3639-47e3-b736-c3693e826852-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.424563 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" event={"ID":"d5bfda69-3639-47e3-b736-c3693e826852","Type":"ContainerDied","Data":"d4c9ff27166da1d1b98ec7bd93a1b499ec25e132cdd9f029277e9c423011f549"} Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.424945 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c9ff27166da1d1b98ec7bd93a1b499ec25e132cdd9f029277e9c423011f549" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.424725 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.506418 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2"] Jan 21 17:53:29 crc kubenswrapper[4823]: E0121 17:53:29.506789 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5bfda69-3639-47e3-b736-c3693e826852" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.506807 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5bfda69-3639-47e3-b736-c3693e826852" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.507064 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5bfda69-3639-47e3-b736-c3693e826852" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.507716 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.511584 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.511891 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.512016 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.512136 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.512266 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.514522 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.514808 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.514865 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.521626 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2"] Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537040 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537096 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537205 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537243 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537312 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jpjt\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-kube-api-access-8jpjt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537356 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537438 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537525 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537556 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537582 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: 
\"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537641 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537723 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.537790 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639085 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639137 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639156 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639186 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc 
kubenswrapper[4823]: I0121 17:53:29.639271 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639298 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639340 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639374 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639442 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639465 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639505 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639551 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jpjt\" (UniqueName: 
\"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-kube-api-access-8jpjt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639584 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.639620 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.645911 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.646206 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.647298 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.647662 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.648198 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 
17:53:29.648294 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.649255 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.649472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.650802 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.651360 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.652353 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.654201 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.655191 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.665266 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jpjt\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-kube-api-access-8jpjt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x5db2\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:29 crc kubenswrapper[4823]: I0121 17:53:29.825968 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:53:30 crc kubenswrapper[4823]: I0121 17:53:30.340222 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2"] Jan 21 17:53:30 crc kubenswrapper[4823]: I0121 17:53:30.432721 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" event={"ID":"3a7db3fc-7bbd-447a-b46d-bea0ab214e38","Type":"ContainerStarted","Data":"b83fe2402e9bfdcac78b54017d2cad24e4f2ee0e2b995d5a712510f03aa37e11"} Jan 21 17:53:32 crc kubenswrapper[4823]: I0121 17:53:32.455654 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" event={"ID":"3a7db3fc-7bbd-447a-b46d-bea0ab214e38","Type":"ContainerStarted","Data":"e8cd1dc14859c10744d79739d1790206ad253c56c777257132d0367f94a6bad3"} Jan 21 17:53:32 crc kubenswrapper[4823]: I0121 17:53:32.479151 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" podStartSLOduration=2.373577286 podStartE2EDuration="3.47913547s" podCreationTimestamp="2026-01-21 17:53:29 +0000 UTC" firstStartedPulling="2026-01-21 17:53:30.352921586 +0000 UTC m=+2211.279052446" lastFinishedPulling="2026-01-21 17:53:31.45847977 +0000 UTC m=+2212.384610630" observedRunningTime="2026-01-21 17:53:32.474025514 +0000 UTC m=+2213.400156374" watchObservedRunningTime="2026-01-21 17:53:32.47913547 +0000 UTC m=+2213.405266330" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.520737 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5lbq7"] Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.523282 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.566112 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5lbq7"] Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.685387 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-catalog-content\") pod \"certified-operators-5lbq7\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.685775 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-utilities\") pod \"certified-operators-5lbq7\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.685983 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvwhn\" (UniqueName: \"kubernetes.io/projected/055182a6-5706-4701-b0ae-8d62f344a048-kube-api-access-tvwhn\") pod \"certified-operators-5lbq7\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.788380 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-catalog-content\") pod \"certified-operators-5lbq7\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.788537 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-utilities\") pod \"certified-operators-5lbq7\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.788598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvwhn\" (UniqueName: \"kubernetes.io/projected/055182a6-5706-4701-b0ae-8d62f344a048-kube-api-access-tvwhn\") pod \"certified-operators-5lbq7\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.789071 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-catalog-content\") pod \"certified-operators-5lbq7\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.789324 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-utilities\") pod \"certified-operators-5lbq7\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.818735 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tvwhn\" (UniqueName: \"kubernetes.io/projected/055182a6-5706-4701-b0ae-8d62f344a048-kube-api-access-tvwhn\") pod \"certified-operators-5lbq7\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:36 crc kubenswrapper[4823]: I0121 17:53:36.870946 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:37 crc kubenswrapper[4823]: I0121 17:53:37.409531 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5lbq7"] Jan 21 17:53:37 crc kubenswrapper[4823]: I0121 17:53:37.500933 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbq7" event={"ID":"055182a6-5706-4701-b0ae-8d62f344a048","Type":"ContainerStarted","Data":"cc2924432289a89e25715a784a6af16b639df93a162e14ef7ffe3122f2957958"} Jan 21 17:53:38 crc kubenswrapper[4823]: I0121 17:53:38.511454 4823 generic.go:334] "Generic (PLEG): container finished" podID="055182a6-5706-4701-b0ae-8d62f344a048" containerID="748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861" exitCode=0 Jan 21 17:53:38 crc kubenswrapper[4823]: I0121 17:53:38.511534 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbq7" event={"ID":"055182a6-5706-4701-b0ae-8d62f344a048","Type":"ContainerDied","Data":"748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861"} Jan 21 17:53:39 crc kubenswrapper[4823]: I0121 17:53:39.521356 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbq7" event={"ID":"055182a6-5706-4701-b0ae-8d62f344a048","Type":"ContainerStarted","Data":"1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3"} Jan 21 17:53:40 crc kubenswrapper[4823]: I0121 17:53:40.531699 4823 generic.go:334] "Generic (PLEG): container finished" podID="055182a6-5706-4701-b0ae-8d62f344a048" containerID="1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3" exitCode=0 Jan 21 17:53:40 crc kubenswrapper[4823]: I0121 17:53:40.531799 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbq7" event={"ID":"055182a6-5706-4701-b0ae-8d62f344a048","Type":"ContainerDied","Data":"1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3"} Jan 21 17:53:41 crc kubenswrapper[4823]: I0121 17:53:41.541931 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbq7" event={"ID":"055182a6-5706-4701-b0ae-8d62f344a048","Type":"ContainerStarted","Data":"a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78"} Jan 21 17:53:41 crc kubenswrapper[4823]: I0121 17:53:41.563401 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5lbq7" podStartSLOduration=2.99692076 podStartE2EDuration="5.563378033s" podCreationTimestamp="2026-01-21 17:53:36 +0000 UTC" firstStartedPulling="2026-01-21 17:53:38.513670968 +0000 UTC m=+2219.439801828" lastFinishedPulling="2026-01-21 17:53:41.080128241 +0000 UTC m=+2222.006259101" observedRunningTime="2026-01-21 17:53:41.561134468 +0000 UTC m=+2222.487265328" watchObservedRunningTime="2026-01-21 17:53:41.563378033 +0000 UTC m=+2222.489508903" Jan 21 17:53:42 crc kubenswrapper[4823]: I0121 17:53:42.344201 4823 scope.go:117] "RemoveContainer" 
containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:53:42 crc kubenswrapper[4823]: E0121 17:53:42.344431 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.211406 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vhp26"] Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.214107 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.224164 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhp26"] Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.399890 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-catalog-content\") pod \"redhat-marketplace-vhp26\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.400391 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-utilities\") pod \"redhat-marketplace-vhp26\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.400657 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7fsn\" (UniqueName: \"kubernetes.io/projected/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-kube-api-access-c7fsn\") pod \"redhat-marketplace-vhp26\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.502189 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-catalog-content\") pod \"redhat-marketplace-vhp26\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.502381 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-utilities\") pod \"redhat-marketplace-vhp26\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.502475 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7fsn\" (UniqueName: \"kubernetes.io/projected/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-kube-api-access-c7fsn\") pod \"redhat-marketplace-vhp26\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc 
kubenswrapper[4823]: I0121 17:53:46.502740 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-catalog-content\") pod \"redhat-marketplace-vhp26\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.503125 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-utilities\") pod \"redhat-marketplace-vhp26\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.524955 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7fsn\" (UniqueName: \"kubernetes.io/projected/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-kube-api-access-c7fsn\") pod \"redhat-marketplace-vhp26\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.541697 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.871740 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.873779 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:46 crc kubenswrapper[4823]: I0121 17:53:46.918494 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:47 crc kubenswrapper[4823]: I0121 17:53:47.002713 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhp26"] Jan 21 17:53:47 crc kubenswrapper[4823]: W0121 17:53:47.004973 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaee2ff03_6ecf_453e_94d7_7dd8fb59e3d5.slice/crio-97fd60e9092ef62ad441e101072010b719985a95d67d1ab83abb5b13dce9db05 WatchSource:0}: Error finding container 97fd60e9092ef62ad441e101072010b719985a95d67d1ab83abb5b13dce9db05: Status 404 returned error can't find the container with id 97fd60e9092ef62ad441e101072010b719985a95d67d1ab83abb5b13dce9db05 Jan 21 17:53:47 crc kubenswrapper[4823]: I0121 17:53:47.592604 4823 generic.go:334] "Generic (PLEG): container finished" podID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerID="12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198" exitCode=0 Jan 21 17:53:47 crc kubenswrapper[4823]: I0121 17:53:47.592658 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhp26" event={"ID":"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5","Type":"ContainerDied","Data":"12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198"} Jan 21 17:53:47 crc kubenswrapper[4823]: I0121 17:53:47.592998 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhp26" event={"ID":"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5","Type":"ContainerStarted","Data":"97fd60e9092ef62ad441e101072010b719985a95d67d1ab83abb5b13dce9db05"} Jan 21 17:53:47 crc kubenswrapper[4823]: 
I0121 17:53:47.647978 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:49 crc kubenswrapper[4823]: I0121 17:53:49.175643 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5lbq7"] Jan 21 17:53:49 crc kubenswrapper[4823]: I0121 17:53:49.612606 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5lbq7" podUID="055182a6-5706-4701-b0ae-8d62f344a048" containerName="registry-server" containerID="cri-o://a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78" gracePeriod=2 Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.283477 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.405950 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-catalog-content\") pod \"055182a6-5706-4701-b0ae-8d62f344a048\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.406185 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvwhn\" (UniqueName: \"kubernetes.io/projected/055182a6-5706-4701-b0ae-8d62f344a048-kube-api-access-tvwhn\") pod \"055182a6-5706-4701-b0ae-8d62f344a048\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.406216 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-utilities\") pod \"055182a6-5706-4701-b0ae-8d62f344a048\" (UID: \"055182a6-5706-4701-b0ae-8d62f344a048\") " Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.407182 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-utilities" (OuterVolumeSpecName: "utilities") pod "055182a6-5706-4701-b0ae-8d62f344a048" (UID: "055182a6-5706-4701-b0ae-8d62f344a048"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.411584 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/055182a6-5706-4701-b0ae-8d62f344a048-kube-api-access-tvwhn" (OuterVolumeSpecName: "kube-api-access-tvwhn") pod "055182a6-5706-4701-b0ae-8d62f344a048" (UID: "055182a6-5706-4701-b0ae-8d62f344a048"). InnerVolumeSpecName "kube-api-access-tvwhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.451044 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "055182a6-5706-4701-b0ae-8d62f344a048" (UID: "055182a6-5706-4701-b0ae-8d62f344a048"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.510968 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvwhn\" (UniqueName: \"kubernetes.io/projected/055182a6-5706-4701-b0ae-8d62f344a048-kube-api-access-tvwhn\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.511003 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.511015 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055182a6-5706-4701-b0ae-8d62f344a048-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.631130 4823 generic.go:334] "Generic (PLEG): container finished" podID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerID="5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695" exitCode=0 Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.631202 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhp26" event={"ID":"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5","Type":"ContainerDied","Data":"5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695"} Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.636562 4823 generic.go:334] "Generic (PLEG): container finished" podID="055182a6-5706-4701-b0ae-8d62f344a048" containerID="a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78" exitCode=0 Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.636615 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5lbq7" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.636606 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbq7" event={"ID":"055182a6-5706-4701-b0ae-8d62f344a048","Type":"ContainerDied","Data":"a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78"} Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.636800 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbq7" event={"ID":"055182a6-5706-4701-b0ae-8d62f344a048","Type":"ContainerDied","Data":"cc2924432289a89e25715a784a6af16b639df93a162e14ef7ffe3122f2957958"} Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.636846 4823 scope.go:117] "RemoveContainer" containerID="a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.672350 4823 scope.go:117] "RemoveContainer" containerID="1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.674621 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5lbq7"] Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.682220 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5lbq7"] Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.693702 4823 scope.go:117] "RemoveContainer" containerID="748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.746151 4823 scope.go:117] "RemoveContainer" containerID="a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78" Jan 21 17:53:51 crc kubenswrapper[4823]: E0121 17:53:51.746669 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78\": container with ID starting with a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78 not found: ID does not exist" containerID="a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.746705 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78"} err="failed to get container status \"a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78\": rpc error: code = NotFound desc = could not find container \"a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78\": container with ID starting with a6e2e3860badfff44120565f0f297807d0bf40cf721679500603d08fb85e3d78 not found: ID does not exist" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.746731 4823 scope.go:117] "RemoveContainer" containerID="1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3" Jan 21 17:53:51 crc kubenswrapper[4823]: E0121 17:53:51.747135 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3\": container with ID starting with 1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3 not found: ID does not exist" containerID="1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.747178 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3"} err="failed to get container status \"1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3\": rpc error: code = NotFound desc = could not find container \"1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3\": container with ID starting with 1017b023d4f22521e442f1506949e1f21210694736668ff617e3e1765c88dcd3 not found: ID does not exist" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.747204 4823 scope.go:117] "RemoveContainer" containerID="748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861" Jan 21 17:53:51 crc kubenswrapper[4823]: E0121 17:53:51.747589 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861\": container with ID starting with 748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861 not found: ID does not exist" containerID="748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861" Jan 21 17:53:51 crc kubenswrapper[4823]: I0121 17:53:51.747694 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861"} err="failed to get container status \"748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861\": rpc error: code = NotFound desc = could not find container \"748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861\": container with ID starting with 748529df7e63931fcb4ae13d9a14c7be6082c9664721b5eae2b12df145ce9861 not found: ID does not exist" Jan 21 17:53:52 crc kubenswrapper[4823]: I0121 17:53:52.649014 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhp26" event={"ID":"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5","Type":"ContainerStarted","Data":"48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784"} Jan 21 17:53:52 crc kubenswrapper[4823]: I0121 17:53:52.673319 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vhp26" podStartSLOduration=2.239627432 podStartE2EDuration="6.673293089s" podCreationTimestamp="2026-01-21 17:53:46 +0000 UTC" firstStartedPulling="2026-01-21 17:53:47.594408005 +0000 UTC m=+2228.520538865" lastFinishedPulling="2026-01-21 17:53:52.028073672 +0000 UTC m=+2232.954204522" observedRunningTime="2026-01-21 17:53:52.665405294 +0000 UTC m=+2233.591536174" watchObservedRunningTime="2026-01-21 17:53:52.673293089 +0000 UTC m=+2233.599423949" Jan 21 17:53:53 crc kubenswrapper[4823]: I0121 17:53:53.343309 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:53:53 crc kubenswrapper[4823]: E0121 17:53:53.343706 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:53:53 crc kubenswrapper[4823]: I0121 17:53:53.355998 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="055182a6-5706-4701-b0ae-8d62f344a048" path="/var/lib/kubelet/pods/055182a6-5706-4701-b0ae-8d62f344a048/volumes" Jan 21 17:53:56 crc kubenswrapper[4823]: I0121 17:53:56.542760 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:56 crc kubenswrapper[4823]: I0121 17:53:56.543163 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:53:56 crc kubenswrapper[4823]: I0121 17:53:56.586603 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:54:06 crc kubenswrapper[4823]: I0121 17:54:06.588772 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:54:06 crc kubenswrapper[4823]: I0121 17:54:06.721034 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhp26"] Jan 21 17:54:06 crc kubenswrapper[4823]: I0121 17:54:06.774329 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vhp26" podUID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerName="registry-server" containerID="cri-o://48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784" gracePeriod=2 Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.243909 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.346800 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:54:07 crc kubenswrapper[4823]: E0121 17:54:07.347177 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.430531 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-utilities\") pod \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.430586 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7fsn\" (UniqueName: \"kubernetes.io/projected/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-kube-api-access-c7fsn\") pod \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.430884 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-catalog-content\") pod \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\" (UID: \"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5\") " Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.431670 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-utilities" (OuterVolumeSpecName: "utilities") pod "aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" (UID: "aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.438771 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-kube-api-access-c7fsn" (OuterVolumeSpecName: "kube-api-access-c7fsn") pod "aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" (UID: "aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5"). InnerVolumeSpecName "kube-api-access-c7fsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.457775 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" (UID: "aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.533403 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.533446 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.533460 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7fsn\" (UniqueName: \"kubernetes.io/projected/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5-kube-api-access-c7fsn\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.787560 4823 generic.go:334] "Generic (PLEG): container finished" podID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerID="48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784" exitCode=0 Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.787610 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhp26" event={"ID":"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5","Type":"ContainerDied","Data":"48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784"} Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.787646 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhp26" event={"ID":"aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5","Type":"ContainerDied","Data":"97fd60e9092ef62ad441e101072010b719985a95d67d1ab83abb5b13dce9db05"} Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.787673 4823 scope.go:117] "RemoveContainer" containerID="48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.787936 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhp26" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.817238 4823 scope.go:117] "RemoveContainer" containerID="5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.831789 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhp26"] Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.840777 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhp26"] Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.856105 4823 scope.go:117] "RemoveContainer" containerID="12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.885229 4823 scope.go:117] "RemoveContainer" containerID="48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784" Jan 21 17:54:07 crc kubenswrapper[4823]: E0121 17:54:07.885591 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784\": container with ID starting with 48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784 not found: ID does not exist" containerID="48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.885621 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784"} err="failed to get container status \"48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784\": rpc error: code = NotFound desc = could not find container \"48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784\": container with ID starting with 48e5a782421458f9f7d572f502f5990d1bfa34c270772cb03fecf185e3987784 not found: ID does not exist" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.885641 4823 scope.go:117] "RemoveContainer" containerID="5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695" Jan 21 17:54:07 crc kubenswrapper[4823]: E0121 17:54:07.886263 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695\": container with ID starting with 5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695 not found: ID does not exist" containerID="5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.886314 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695"} err="failed to get container status \"5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695\": rpc error: code = NotFound desc = could not find container \"5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695\": container with ID starting with 5d8a57e56a9f0b0e599fc24cba986e1461dde2f6e0557a5802be3e7737809695 not found: ID does not exist" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.886331 4823 scope.go:117] "RemoveContainer" containerID="12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198" Jan 21 17:54:07 crc kubenswrapper[4823]: E0121 17:54:07.886838 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198\": container with ID starting with 12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198 not found: ID does not exist" containerID="12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198" Jan 21 17:54:07 crc kubenswrapper[4823]: I0121 17:54:07.886899 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198"} err="failed to get container status \"12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198\": rpc error: code = NotFound desc = could not find container \"12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198\": container with ID starting with 12d541df2accca90872e61e89070df48613b2030e598a342140c94ffb5276198 not found: ID does not exist" Jan 21 17:54:08 crc kubenswrapper[4823]: I0121 17:54:08.806740 4823 generic.go:334] "Generic (PLEG): container finished" podID="3a7db3fc-7bbd-447a-b46d-bea0ab214e38" containerID="e8cd1dc14859c10744d79739d1790206ad253c56c777257132d0367f94a6bad3" exitCode=0 Jan 21 17:54:08 crc kubenswrapper[4823]: I0121 17:54:08.806840 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" event={"ID":"3a7db3fc-7bbd-447a-b46d-bea0ab214e38","Type":"ContainerDied","Data":"e8cd1dc14859c10744d79739d1790206ad253c56c777257132d0367f94a6bad3"} Jan 21 17:54:09 crc kubenswrapper[4823]: I0121 17:54:09.356726 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" path="/var/lib/kubelet/pods/aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5/volumes" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.220465 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386197 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-libvirt-combined-ca-bundle\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386275 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-telemetry-combined-ca-bundle\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386377 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-repo-setup-combined-ca-bundle\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386415 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386537 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-neutron-metadata-combined-ca-bundle\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386578 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-nova-combined-ca-bundle\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386616 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-inventory\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386655 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ovn-combined-ca-bundle\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386752 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-bootstrap-combined-ca-bundle\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 
17:54:10.386794 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386847 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jpjt\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-kube-api-access-8jpjt\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.386988 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.387034 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ssh-key-openstack-edpm-ipam\") pod \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\" (UID: \"3a7db3fc-7bbd-447a-b46d-bea0ab214e38\") " Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.393649 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.393943 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-kube-api-access-8jpjt" (OuterVolumeSpecName: "kube-api-access-8jpjt") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "kube-api-access-8jpjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.394476 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.394543 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.396065 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.396106 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.398005 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.398626 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.399146 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.401089 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.406057 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.408400 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.434689 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-inventory" (OuterVolumeSpecName: "inventory") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.435483 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3a7db3fc-7bbd-447a-b46d-bea0ab214e38" (UID: "3a7db3fc-7bbd-447a-b46d-bea0ab214e38"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490031 4823 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490073 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490086 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jpjt\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-kube-api-access-8jpjt\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490099 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490109 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490122 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490144 4823 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490158 4823 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490169 4823 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490182 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490194 4823 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490204 4823 reconciler_common.go:293] "Volume detached for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490214 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.490228 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7db3fc-7bbd-447a-b46d-bea0ab214e38-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.824592 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" event={"ID":"3a7db3fc-7bbd-447a-b46d-bea0ab214e38","Type":"ContainerDied","Data":"b83fe2402e9bfdcac78b54017d2cad24e4f2ee0e2b995d5a712510f03aa37e11"} Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.824987 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b83fe2402e9bfdcac78b54017d2cad24e4f2ee0e2b995d5a712510f03aa37e11" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.824713 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x5db2" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.920811 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk"] Jan 21 17:54:10 crc kubenswrapper[4823]: E0121 17:54:10.921225 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerName="extract-content" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.921241 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerName="extract-content" Jan 21 17:54:10 crc kubenswrapper[4823]: E0121 17:54:10.921259 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055182a6-5706-4701-b0ae-8d62f344a048" containerName="extract-content" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.921267 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="055182a6-5706-4701-b0ae-8d62f344a048" containerName="extract-content" Jan 21 17:54:10 crc kubenswrapper[4823]: E0121 17:54:10.921287 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerName="registry-server" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.921295 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerName="registry-server" Jan 21 17:54:10 crc kubenswrapper[4823]: E0121 17:54:10.921309 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7db3fc-7bbd-447a-b46d-bea0ab214e38" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.921318 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7db3fc-7bbd-447a-b46d-bea0ab214e38" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 17:54:10 crc kubenswrapper[4823]: E0121 17:54:10.921339 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055182a6-5706-4701-b0ae-8d62f344a048" containerName="registry-server" Jan 21 17:54:10 crc kubenswrapper[4823]: 
I0121 17:54:10.921348 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="055182a6-5706-4701-b0ae-8d62f344a048" containerName="registry-server" Jan 21 17:54:10 crc kubenswrapper[4823]: E0121 17:54:10.921363 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerName="extract-utilities" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.921371 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerName="extract-utilities" Jan 21 17:54:10 crc kubenswrapper[4823]: E0121 17:54:10.921386 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055182a6-5706-4701-b0ae-8d62f344a048" containerName="extract-utilities" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.921392 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="055182a6-5706-4701-b0ae-8d62f344a048" containerName="extract-utilities" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.921568 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="055182a6-5706-4701-b0ae-8d62f344a048" containerName="registry-server" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.921590 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a7db3fc-7bbd-447a-b46d-bea0ab214e38" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.921605 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee2ff03-6ecf-453e-94d7-7dd8fb59e3d5" containerName="registry-server" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.922498 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.926841 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.926987 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.927368 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.927486 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.930143 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:54:10 crc kubenswrapper[4823]: I0121 17:54:10.950915 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk"] Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.101607 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.101756 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.102105 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.102266 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.102308 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbq5z\" (UniqueName: \"kubernetes.io/projected/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-kube-api-access-sbq5z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.203794 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.203991 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.204098 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.204143 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbq5z\" (UniqueName: \"kubernetes.io/projected/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-kube-api-access-sbq5z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.204276 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.205485 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.208221 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.210089 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.219395 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.231832 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbq5z\" (UniqueName: \"kubernetes.io/projected/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-kube-api-access-sbq5z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jd9kk\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.254406 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.777964 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk"] Jan 21 17:54:11 crc kubenswrapper[4823]: I0121 17:54:11.837736 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" event={"ID":"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c","Type":"ContainerStarted","Data":"66e26fdf9feeebd7cccd895b47f6a944064b3708586ccf6b9632630b7ef5c1c1"} Jan 21 17:54:12 crc kubenswrapper[4823]: I0121 17:54:12.848342 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" event={"ID":"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c","Type":"ContainerStarted","Data":"06fbea69b61f6ef708eb0ac8c0a0a656d8e428e0bb865334690771e5188bda8f"} Jan 21 17:54:12 crc kubenswrapper[4823]: I0121 17:54:12.871937 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" podStartSLOduration=2.265769716 podStartE2EDuration="2.871899509s" podCreationTimestamp="2026-01-21 17:54:10 +0000 UTC" firstStartedPulling="2026-01-21 17:54:11.77867119 +0000 UTC m=+2252.704802050" lastFinishedPulling="2026-01-21 17:54:12.384800983 +0000 UTC m=+2253.310931843" observedRunningTime="2026-01-21 17:54:12.869552551 +0000 UTC m=+2253.795683421" watchObservedRunningTime="2026-01-21 17:54:12.871899509 +0000 UTC m=+2253.798030359" Jan 21 17:54:21 crc kubenswrapper[4823]: I0121 17:54:21.343892 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:54:21 crc kubenswrapper[4823]: E0121 17:54:21.344715 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:54:33 crc kubenswrapper[4823]: I0121 17:54:33.343345 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:54:33 crc kubenswrapper[4823]: E0121 17:54:33.344136 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:54:47 crc kubenswrapper[4823]: I0121 17:54:47.349354 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:54:47 crc kubenswrapper[4823]: E0121 17:54:47.350081 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" 
podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:54:58 crc kubenswrapper[4823]: I0121 17:54:58.344273 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:54:58 crc kubenswrapper[4823]: E0121 17:54:58.345311 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:55:09 crc kubenswrapper[4823]: I0121 17:55:09.349699 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:55:09 crc kubenswrapper[4823]: E0121 17:55:09.350532 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:55:21 crc kubenswrapper[4823]: I0121 17:55:21.344468 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:55:21 crc kubenswrapper[4823]: E0121 17:55:21.345903 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:55:25 crc kubenswrapper[4823]: I0121 17:55:25.552395 4823 generic.go:334] "Generic (PLEG): container finished" podID="dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c" containerID="06fbea69b61f6ef708eb0ac8c0a0a656d8e428e0bb865334690771e5188bda8f" exitCode=0 Jan 21 17:55:25 crc kubenswrapper[4823]: I0121 17:55:25.552528 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" event={"ID":"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c","Type":"ContainerDied","Data":"06fbea69b61f6ef708eb0ac8c0a0a656d8e428e0bb865334690771e5188bda8f"} Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.001068 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.150765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovncontroller-config-0\") pod \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.150956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-inventory\") pod \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.151018 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbq5z\" (UniqueName: \"kubernetes.io/projected/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-kube-api-access-sbq5z\") pod \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.151108 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovn-combined-ca-bundle\") pod \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.151146 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ssh-key-openstack-edpm-ipam\") pod \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\" (UID: \"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c\") " Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.158486 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-kube-api-access-sbq5z" (OuterVolumeSpecName: "kube-api-access-sbq5z") pod "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c" (UID: "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c"). InnerVolumeSpecName "kube-api-access-sbq5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.158609 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c" (UID: "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.186728 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-inventory" (OuterVolumeSpecName: "inventory") pod "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c" (UID: "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.204425 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c" (UID: "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.212897 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c" (UID: "dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.253669 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.253709 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbq5z\" (UniqueName: \"kubernetes.io/projected/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-kube-api-access-sbq5z\") on node \"crc\" DevicePath \"\"" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.253723 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.253734 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.253747 4823 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.571642 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" event={"ID":"dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c","Type":"ContainerDied","Data":"66e26fdf9feeebd7cccd895b47f6a944064b3708586ccf6b9632630b7ef5c1c1"} Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.571685 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e26fdf9feeebd7cccd895b47f6a944064b3708586ccf6b9632630b7ef5c1c1" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.571717 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jd9kk" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.660616 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt"] Jan 21 17:55:27 crc kubenswrapper[4823]: E0121 17:55:27.661100 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.661125 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.661366 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.662162 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.665963 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.665962 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.667700 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.668818 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.668962 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.670348 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.679123 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt"] Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.764002 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.764071 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.764583 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2bpl\" (UniqueName: \"kubernetes.io/projected/07d18401-f812-4b0d-91fe-b330f237f2f8-kube-api-access-j2bpl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.764722 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.764776 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.764802 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.866548 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.866604 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.866655 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.866681 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.866784 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2bpl\" (UniqueName: \"kubernetes.io/projected/07d18401-f812-4b0d-91fe-b330f237f2f8-kube-api-access-j2bpl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.866836 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.870726 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.870726 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.871003 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.871666 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.872723 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc 
kubenswrapper[4823]: I0121 17:55:27.884025 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2bpl\" (UniqueName: \"kubernetes.io/projected/07d18401-f812-4b0d-91fe-b330f237f2f8-kube-api-access-j2bpl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:27 crc kubenswrapper[4823]: I0121 17:55:27.984700 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:55:28 crc kubenswrapper[4823]: W0121 17:55:28.527217 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07d18401_f812_4b0d_91fe_b330f237f2f8.slice/crio-56a263a1fbd5961526d83e5ca6f6f1f48f2130098245ab17759b2cc6a0430aca WatchSource:0}: Error finding container 56a263a1fbd5961526d83e5ca6f6f1f48f2130098245ab17759b2cc6a0430aca: Status 404 returned error can't find the container with id 56a263a1fbd5961526d83e5ca6f6f1f48f2130098245ab17759b2cc6a0430aca Jan 21 17:55:28 crc kubenswrapper[4823]: I0121 17:55:28.531968 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt"] Jan 21 17:55:28 crc kubenswrapper[4823]: I0121 17:55:28.582579 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" event={"ID":"07d18401-f812-4b0d-91fe-b330f237f2f8","Type":"ContainerStarted","Data":"56a263a1fbd5961526d83e5ca6f6f1f48f2130098245ab17759b2cc6a0430aca"} Jan 21 17:55:29 crc kubenswrapper[4823]: I0121 17:55:29.591413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" event={"ID":"07d18401-f812-4b0d-91fe-b330f237f2f8","Type":"ContainerStarted","Data":"7c2fb23641559e863210683f98d115a421aa58bca2d60ce7bd7b6639cdfb2c10"} Jan 21 17:55:34 crc kubenswrapper[4823]: I0121 17:55:34.344758 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:55:34 crc kubenswrapper[4823]: E0121 17:55:34.345731 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:55:49 crc kubenswrapper[4823]: I0121 17:55:49.352467 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:55:49 crc kubenswrapper[4823]: E0121 17:55:49.353238 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.710786 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" podStartSLOduration=35.31195427 podStartE2EDuration="35.710766489s" podCreationTimestamp="2026-01-21 17:55:27 +0000 UTC" firstStartedPulling="2026-01-21 17:55:28.52975429 +0000 UTC m=+2329.455885150" lastFinishedPulling="2026-01-21 17:55:28.928566509 +0000 UTC m=+2329.854697369" observedRunningTime="2026-01-21 17:55:29.611187589 +0000 UTC m=+2330.537318449" watchObservedRunningTime="2026-01-21 17:56:02.710766489 +0000 UTC m=+2363.636897349" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.712601 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n8c8r"] Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.715057 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.728631 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n8c8r"] Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.802068 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-catalog-content\") pod \"redhat-operators-n8c8r\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.802490 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-utilities\") pod \"redhat-operators-n8c8r\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.802682 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxnjl\" (UniqueName: \"kubernetes.io/projected/c62397d4-3818-405c-abcb-ee7d106f68b8-kube-api-access-pxnjl\") pod \"redhat-operators-n8c8r\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.904830 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-catalog-content\") pod \"redhat-operators-n8c8r\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.905271 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-utilities\") pod \"redhat-operators-n8c8r\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.905417 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxnjl\" (UniqueName: \"kubernetes.io/projected/c62397d4-3818-405c-abcb-ee7d106f68b8-kube-api-access-pxnjl\") pod \"redhat-operators-n8c8r\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.906627 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-catalog-content\") pod \"redhat-operators-n8c8r\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.906989 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-utilities\") pod \"redhat-operators-n8c8r\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:02 crc kubenswrapper[4823]: I0121 17:56:02.937032 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxnjl\" (UniqueName: \"kubernetes.io/projected/c62397d4-3818-405c-abcb-ee7d106f68b8-kube-api-access-pxnjl\") pod \"redhat-operators-n8c8r\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:03 crc kubenswrapper[4823]: I0121 17:56:03.076323 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:03 crc kubenswrapper[4823]: I0121 17:56:03.677003 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n8c8r"] Jan 21 17:56:03 crc kubenswrapper[4823]: I0121 17:56:03.952726 4823 generic.go:334] "Generic (PLEG): container finished" podID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerID="8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263" exitCode=0 Jan 21 17:56:03 crc kubenswrapper[4823]: I0121 17:56:03.952781 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n8c8r" event={"ID":"c62397d4-3818-405c-abcb-ee7d106f68b8","Type":"ContainerDied","Data":"8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263"} Jan 21 17:56:03 crc kubenswrapper[4823]: I0121 17:56:03.952810 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n8c8r" event={"ID":"c62397d4-3818-405c-abcb-ee7d106f68b8","Type":"ContainerStarted","Data":"740a884cfe240a5429fb408901023932705fd9b1f62184c7478514678cb626a8"} Jan 21 17:56:04 crc kubenswrapper[4823]: I0121 17:56:04.343960 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:56:04 crc kubenswrapper[4823]: E0121 17:56:04.344466 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:56:04 crc kubenswrapper[4823]: I0121 17:56:04.963403 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n8c8r" event={"ID":"c62397d4-3818-405c-abcb-ee7d106f68b8","Type":"ContainerStarted","Data":"8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b"} Jan 21 17:56:07 crc kubenswrapper[4823]: I0121 17:56:07.996379 4823 generic.go:334] "Generic (PLEG): container finished" podID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerID="8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b" 
exitCode=0 Jan 21 17:56:07 crc kubenswrapper[4823]: I0121 17:56:07.997020 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n8c8r" event={"ID":"c62397d4-3818-405c-abcb-ee7d106f68b8","Type":"ContainerDied","Data":"8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b"} Jan 21 17:56:09 crc kubenswrapper[4823]: I0121 17:56:09.006747 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n8c8r" event={"ID":"c62397d4-3818-405c-abcb-ee7d106f68b8","Type":"ContainerStarted","Data":"2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27"} Jan 21 17:56:09 crc kubenswrapper[4823]: I0121 17:56:09.037930 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n8c8r" podStartSLOduration=2.421897102 podStartE2EDuration="7.037906987s" podCreationTimestamp="2026-01-21 17:56:02 +0000 UTC" firstStartedPulling="2026-01-21 17:56:03.954395449 +0000 UTC m=+2364.880526309" lastFinishedPulling="2026-01-21 17:56:08.570405334 +0000 UTC m=+2369.496536194" observedRunningTime="2026-01-21 17:56:09.025704666 +0000 UTC m=+2369.951835526" watchObservedRunningTime="2026-01-21 17:56:09.037906987 +0000 UTC m=+2369.964037847" Jan 21 17:56:13 crc kubenswrapper[4823]: I0121 17:56:13.077136 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:13 crc kubenswrapper[4823]: I0121 17:56:13.077492 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:14 crc kubenswrapper[4823]: I0121 17:56:14.139033 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n8c8r" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerName="registry-server" probeResult="failure" output=< Jan 21 17:56:14 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 21 17:56:14 crc kubenswrapper[4823]: > Jan 21 17:56:17 crc kubenswrapper[4823]: I0121 17:56:17.343973 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:56:17 crc kubenswrapper[4823]: E0121 17:56:17.345546 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:56:22 crc kubenswrapper[4823]: I0121 17:56:22.124748 4823 generic.go:334] "Generic (PLEG): container finished" podID="07d18401-f812-4b0d-91fe-b330f237f2f8" containerID="7c2fb23641559e863210683f98d115a421aa58bca2d60ce7bd7b6639cdfb2c10" exitCode=0 Jan 21 17:56:22 crc kubenswrapper[4823]: I0121 17:56:22.124838 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" event={"ID":"07d18401-f812-4b0d-91fe-b330f237f2f8","Type":"ContainerDied","Data":"7c2fb23641559e863210683f98d115a421aa58bca2d60ce7bd7b6639cdfb2c10"} Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.148737 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:23 crc 
kubenswrapper[4823]: I0121 17:56:23.200027 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.382905 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n8c8r"] Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.580845 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.655397 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-ssh-key-openstack-edpm-ipam\") pod \"07d18401-f812-4b0d-91fe-b330f237f2f8\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.655564 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-nova-metadata-neutron-config-0\") pod \"07d18401-f812-4b0d-91fe-b330f237f2f8\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.655616 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-inventory\") pod \"07d18401-f812-4b0d-91fe-b330f237f2f8\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.655681 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-metadata-combined-ca-bundle\") pod \"07d18401-f812-4b0d-91fe-b330f237f2f8\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.655714 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2bpl\" (UniqueName: \"kubernetes.io/projected/07d18401-f812-4b0d-91fe-b330f237f2f8-kube-api-access-j2bpl\") pod \"07d18401-f812-4b0d-91fe-b330f237f2f8\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.655733 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"07d18401-f812-4b0d-91fe-b330f237f2f8\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.661714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d18401-f812-4b0d-91fe-b330f237f2f8-kube-api-access-j2bpl" (OuterVolumeSpecName: "kube-api-access-j2bpl") pod "07d18401-f812-4b0d-91fe-b330f237f2f8" (UID: "07d18401-f812-4b0d-91fe-b330f237f2f8"). InnerVolumeSpecName "kube-api-access-j2bpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.661747 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "07d18401-f812-4b0d-91fe-b330f237f2f8" (UID: "07d18401-f812-4b0d-91fe-b330f237f2f8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:56:23 crc kubenswrapper[4823]: E0121 17:56:23.685163 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-ssh-key-openstack-edpm-ipam podName:07d18401-f812-4b0d-91fe-b330f237f2f8 nodeName:}" failed. No retries permitted until 2026-01-21 17:56:24.185135857 +0000 UTC m=+2385.111266717 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-ssh-key-openstack-edpm-ipam") pod "07d18401-f812-4b0d-91fe-b330f237f2f8" (UID: "07d18401-f812-4b0d-91fe-b330f237f2f8") : error deleting /var/lib/kubelet/pods/07d18401-f812-4b0d-91fe-b330f237f2f8/volume-subpaths: remove /var/lib/kubelet/pods/07d18401-f812-4b0d-91fe-b330f237f2f8/volume-subpaths: no such file or directory Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.685483 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "07d18401-f812-4b0d-91fe-b330f237f2f8" (UID: "07d18401-f812-4b0d-91fe-b330f237f2f8"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.685758 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "07d18401-f812-4b0d-91fe-b330f237f2f8" (UID: "07d18401-f812-4b0d-91fe-b330f237f2f8"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.688699 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-inventory" (OuterVolumeSpecName: "inventory") pod "07d18401-f812-4b0d-91fe-b330f237f2f8" (UID: "07d18401-f812-4b0d-91fe-b330f237f2f8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.758365 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.758410 4823 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.758431 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2bpl\" (UniqueName: \"kubernetes.io/projected/07d18401-f812-4b0d-91fe-b330f237f2f8-kube-api-access-j2bpl\") on node \"crc\" DevicePath \"\"" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.758452 4823 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:56:23 crc kubenswrapper[4823]: I0121 17:56:23.758470 4823 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.140694 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" event={"ID":"07d18401-f812-4b0d-91fe-b330f237f2f8","Type":"ContainerDied","Data":"56a263a1fbd5961526d83e5ca6f6f1f48f2130098245ab17759b2cc6a0430aca"} Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.140737 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.140756 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56a263a1fbd5961526d83e5ca6f6f1f48f2130098245ab17759b2cc6a0430aca" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.266986 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-ssh-key-openstack-edpm-ipam\") pod \"07d18401-f812-4b0d-91fe-b330f237f2f8\" (UID: \"07d18401-f812-4b0d-91fe-b330f237f2f8\") " Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.277995 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07d18401-f812-4b0d-91fe-b330f237f2f8" (UID: "07d18401-f812-4b0d-91fe-b330f237f2f8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.320908 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96"] Jan 21 17:56:24 crc kubenswrapper[4823]: E0121 17:56:24.321371 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d18401-f812-4b0d-91fe-b330f237f2f8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.321388 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d18401-f812-4b0d-91fe-b330f237f2f8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.321609 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d18401-f812-4b0d-91fe-b330f237f2f8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.322308 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.325661 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.332109 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96"] Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.378891 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.378939 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.379023 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.379084 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdbxf\" (UniqueName: \"kubernetes.io/projected/543a718f-39d9-4c35-bd9e-888f739b9726-kube-api-access-tdbxf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.379104 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.379246 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07d18401-f812-4b0d-91fe-b330f237f2f8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.480583 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.480676 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdbxf\" (UniqueName: \"kubernetes.io/projected/543a718f-39d9-4c35-bd9e-888f739b9726-kube-api-access-tdbxf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.480802 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.480897 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.480922 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.484328 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.484336 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: 
\"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.485390 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.498788 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.501319 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdbxf\" (UniqueName: \"kubernetes.io/projected/543a718f-39d9-4c35-bd9e-888f739b9726-kube-api-access-tdbxf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xbt96\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:24 crc kubenswrapper[4823]: I0121 17:56:24.693980 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.150372 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n8c8r" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerName="registry-server" containerID="cri-o://2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27" gracePeriod=2 Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.229743 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96"] Jan 21 17:56:25 crc kubenswrapper[4823]: W0121 17:56:25.287905 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod543a718f_39d9_4c35_bd9e_888f739b9726.slice/crio-767a6eaca3e6b162e73042a089b644eedbe097a8e77fab44ce1b53b72928c83a WatchSource:0}: Error finding container 767a6eaca3e6b162e73042a089b644eedbe097a8e77fab44ce1b53b72928c83a: Status 404 returned error can't find the container with id 767a6eaca3e6b162e73042a089b644eedbe097a8e77fab44ce1b53b72928c83a Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.599220 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.603450 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-utilities\") pod \"c62397d4-3818-405c-abcb-ee7d106f68b8\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.603683 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxnjl\" (UniqueName: \"kubernetes.io/projected/c62397d4-3818-405c-abcb-ee7d106f68b8-kube-api-access-pxnjl\") pod \"c62397d4-3818-405c-abcb-ee7d106f68b8\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.603909 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-catalog-content\") pod \"c62397d4-3818-405c-abcb-ee7d106f68b8\" (UID: \"c62397d4-3818-405c-abcb-ee7d106f68b8\") " Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.606009 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-utilities" (OuterVolumeSpecName: "utilities") pod "c62397d4-3818-405c-abcb-ee7d106f68b8" (UID: "c62397d4-3818-405c-abcb-ee7d106f68b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.612391 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c62397d4-3818-405c-abcb-ee7d106f68b8-kube-api-access-pxnjl" (OuterVolumeSpecName: "kube-api-access-pxnjl") pod "c62397d4-3818-405c-abcb-ee7d106f68b8" (UID: "c62397d4-3818-405c-abcb-ee7d106f68b8"). InnerVolumeSpecName "kube-api-access-pxnjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.706128 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.706339 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxnjl\" (UniqueName: \"kubernetes.io/projected/c62397d4-3818-405c-abcb-ee7d106f68b8-kube-api-access-pxnjl\") on node \"crc\" DevicePath \"\"" Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.734353 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c62397d4-3818-405c-abcb-ee7d106f68b8" (UID: "c62397d4-3818-405c-abcb-ee7d106f68b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:56:25 crc kubenswrapper[4823]: I0121 17:56:25.807908 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c62397d4-3818-405c-abcb-ee7d106f68b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.186963 4823 generic.go:334] "Generic (PLEG): container finished" podID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerID="2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27" exitCode=0 Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.187065 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n8c8r" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.187062 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n8c8r" event={"ID":"c62397d4-3818-405c-abcb-ee7d106f68b8","Type":"ContainerDied","Data":"2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27"} Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.187598 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n8c8r" event={"ID":"c62397d4-3818-405c-abcb-ee7d106f68b8","Type":"ContainerDied","Data":"740a884cfe240a5429fb408901023932705fd9b1f62184c7478514678cb626a8"} Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.187621 4823 scope.go:117] "RemoveContainer" containerID="2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.190530 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" event={"ID":"543a718f-39d9-4c35-bd9e-888f739b9726","Type":"ContainerStarted","Data":"ad1e5448afd412ca5aa6f82a54f6431dad6957e97e309d313e55dd7f06961819"} Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.190675 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" event={"ID":"543a718f-39d9-4c35-bd9e-888f739b9726","Type":"ContainerStarted","Data":"767a6eaca3e6b162e73042a089b644eedbe097a8e77fab44ce1b53b72928c83a"} Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.218758 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" podStartSLOduration=1.769682461 podStartE2EDuration="2.218738849s" podCreationTimestamp="2026-01-21 17:56:24 +0000 UTC" firstStartedPulling="2026-01-21 17:56:25.290573512 +0000 UTC m=+2386.216704372" lastFinishedPulling="2026-01-21 17:56:25.7396299 +0000 UTC m=+2386.665760760" observedRunningTime="2026-01-21 17:56:26.211746427 +0000 UTC m=+2387.137877307" watchObservedRunningTime="2026-01-21 17:56:26.218738849 +0000 UTC m=+2387.144869709" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.233058 4823 scope.go:117] "RemoveContainer" containerID="8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.242792 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n8c8r"] Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.253001 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n8c8r"] Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.254044 4823 scope.go:117] "RemoveContainer" 
containerID="8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.270402 4823 scope.go:117] "RemoveContainer" containerID="2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27" Jan 21 17:56:26 crc kubenswrapper[4823]: E0121 17:56:26.270821 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27\": container with ID starting with 2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27 not found: ID does not exist" containerID="2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.270875 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27"} err="failed to get container status \"2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27\": rpc error: code = NotFound desc = could not find container \"2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27\": container with ID starting with 2afd80a39ef4d261003c9275eae2a41874f1670cb0038ece33fd451db7f8db27 not found: ID does not exist" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.270902 4823 scope.go:117] "RemoveContainer" containerID="8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b" Jan 21 17:56:26 crc kubenswrapper[4823]: E0121 17:56:26.271158 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b\": container with ID starting with 8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b not found: ID does not exist" containerID="8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.271179 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b"} err="failed to get container status \"8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b\": rpc error: code = NotFound desc = could not find container \"8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b\": container with ID starting with 8126ed9cac23e5d4de7748b9907d9d224d0228dc5b38ffe3f5f565b9a115ea9b not found: ID does not exist" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.271211 4823 scope.go:117] "RemoveContainer" containerID="8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263" Jan 21 17:56:26 crc kubenswrapper[4823]: E0121 17:56:26.271455 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263\": container with ID starting with 8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263 not found: ID does not exist" containerID="8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263" Jan 21 17:56:26 crc kubenswrapper[4823]: I0121 17:56:26.271513 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263"} err="failed to get container status \"8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263\": rpc error: code = 
NotFound desc = could not find container \"8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263\": container with ID starting with 8d22e020d78fe2dfde1b9b74e8830168003a5ba3e96be61976611a625f004263 not found: ID does not exist" Jan 21 17:56:27 crc kubenswrapper[4823]: I0121 17:56:27.356019 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" path="/var/lib/kubelet/pods/c62397d4-3818-405c-abcb-ee7d106f68b8/volumes" Jan 21 17:56:31 crc kubenswrapper[4823]: I0121 17:56:31.343948 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:56:31 crc kubenswrapper[4823]: E0121 17:56:31.344686 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:56:42 crc kubenswrapper[4823]: I0121 17:56:42.343640 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:56:42 crc kubenswrapper[4823]: E0121 17:56:42.344461 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:56:56 crc kubenswrapper[4823]: I0121 17:56:56.343698 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:56:56 crc kubenswrapper[4823]: E0121 17:56:56.345603 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:57:11 crc kubenswrapper[4823]: I0121 17:57:11.344977 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:57:11 crc kubenswrapper[4823]: E0121 17:57:11.345846 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:57:22 crc kubenswrapper[4823]: I0121 17:57:22.344479 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:57:22 crc kubenswrapper[4823]: E0121 17:57:22.345415 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:57:37 crc kubenswrapper[4823]: I0121 17:57:37.345405 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:57:37 crc kubenswrapper[4823]: E0121 17:57:37.346201 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:57:48 crc kubenswrapper[4823]: I0121 17:57:48.344500 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:57:48 crc kubenswrapper[4823]: E0121 17:57:48.346034 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:57:59 crc kubenswrapper[4823]: I0121 17:57:59.343535 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:57:59 crc kubenswrapper[4823]: E0121 17:57:59.344279 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:58:12 crc kubenswrapper[4823]: I0121 17:58:12.345217 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:58:12 crc kubenswrapper[4823]: E0121 17:58:12.346738 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 17:58:23 crc kubenswrapper[4823]: I0121 17:58:23.344267 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 17:58:24 crc kubenswrapper[4823]: I0121 17:58:24.462712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"981166277ef96120a45297ce5b5c3d23d592c823e623793778dddeac0620c8d0"} Jan 21 18:00:00 crc kubenswrapper[4823]: 
I0121 18:00:00.151946 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h"] Jan 21 18:00:00 crc kubenswrapper[4823]: E0121 18:00:00.152928 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerName="extract-utilities" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.152942 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerName="extract-utilities" Jan 21 18:00:00 crc kubenswrapper[4823]: E0121 18:00:00.152970 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerName="extract-content" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.152976 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerName="extract-content" Jan 21 18:00:00 crc kubenswrapper[4823]: E0121 18:00:00.152992 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerName="registry-server" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.152999 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerName="registry-server" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.153198 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c62397d4-3818-405c-abcb-ee7d106f68b8" containerName="registry-server" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.153954 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.155781 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.156191 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.163058 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h"] Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.249585 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-565fw\" (UniqueName: \"kubernetes.io/projected/0208cb78-6a49-4a77-bba2-d269d5c015a0-kube-api-access-565fw\") pod \"collect-profiles-29483640-hr25h\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.250033 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0208cb78-6a49-4a77-bba2-d269d5c015a0-secret-volume\") pod \"collect-profiles-29483640-hr25h\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.250097 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0208cb78-6a49-4a77-bba2-d269d5c015a0-config-volume\") pod 
\"collect-profiles-29483640-hr25h\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.352275 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0208cb78-6a49-4a77-bba2-d269d5c015a0-config-volume\") pod \"collect-profiles-29483640-hr25h\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.352376 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-565fw\" (UniqueName: \"kubernetes.io/projected/0208cb78-6a49-4a77-bba2-d269d5c015a0-kube-api-access-565fw\") pod \"collect-profiles-29483640-hr25h\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.352461 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0208cb78-6a49-4a77-bba2-d269d5c015a0-secret-volume\") pod \"collect-profiles-29483640-hr25h\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.353266 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0208cb78-6a49-4a77-bba2-d269d5c015a0-config-volume\") pod \"collect-profiles-29483640-hr25h\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.357850 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0208cb78-6a49-4a77-bba2-d269d5c015a0-secret-volume\") pod \"collect-profiles-29483640-hr25h\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.367804 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-565fw\" (UniqueName: \"kubernetes.io/projected/0208cb78-6a49-4a77-bba2-d269d5c015a0-kube-api-access-565fw\") pod \"collect-profiles-29483640-hr25h\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.487799 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:00 crc kubenswrapper[4823]: I0121 18:00:00.970042 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h"] Jan 21 18:00:01 crc kubenswrapper[4823]: I0121 18:00:01.097322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" event={"ID":"0208cb78-6a49-4a77-bba2-d269d5c015a0","Type":"ContainerStarted","Data":"ce0d77014efca420d694b47402396463c492823e6dd1c2d5523d8dc85229b9a2"} Jan 21 18:00:02 crc kubenswrapper[4823]: I0121 18:00:02.108199 4823 generic.go:334] "Generic (PLEG): container finished" podID="0208cb78-6a49-4a77-bba2-d269d5c015a0" containerID="31c24add75e28df7d6839888f9b1297e8febc39ff905b636cc131523f1ec2e78" exitCode=0 Jan 21 18:00:02 crc kubenswrapper[4823]: I0121 18:00:02.108304 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" event={"ID":"0208cb78-6a49-4a77-bba2-d269d5c015a0","Type":"ContainerDied","Data":"31c24add75e28df7d6839888f9b1297e8febc39ff905b636cc131523f1ec2e78"} Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.458162 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.637171 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0208cb78-6a49-4a77-bba2-d269d5c015a0-config-volume\") pod \"0208cb78-6a49-4a77-bba2-d269d5c015a0\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.637272 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0208cb78-6a49-4a77-bba2-d269d5c015a0-secret-volume\") pod \"0208cb78-6a49-4a77-bba2-d269d5c015a0\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.637418 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-565fw\" (UniqueName: \"kubernetes.io/projected/0208cb78-6a49-4a77-bba2-d269d5c015a0-kube-api-access-565fw\") pod \"0208cb78-6a49-4a77-bba2-d269d5c015a0\" (UID: \"0208cb78-6a49-4a77-bba2-d269d5c015a0\") " Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.638493 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0208cb78-6a49-4a77-bba2-d269d5c015a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "0208cb78-6a49-4a77-bba2-d269d5c015a0" (UID: "0208cb78-6a49-4a77-bba2-d269d5c015a0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.643635 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0208cb78-6a49-4a77-bba2-d269d5c015a0-kube-api-access-565fw" (OuterVolumeSpecName: "kube-api-access-565fw") pod "0208cb78-6a49-4a77-bba2-d269d5c015a0" (UID: "0208cb78-6a49-4a77-bba2-d269d5c015a0"). InnerVolumeSpecName "kube-api-access-565fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.643820 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0208cb78-6a49-4a77-bba2-d269d5c015a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0208cb78-6a49-4a77-bba2-d269d5c015a0" (UID: "0208cb78-6a49-4a77-bba2-d269d5c015a0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.739977 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0208cb78-6a49-4a77-bba2-d269d5c015a0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.740039 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0208cb78-6a49-4a77-bba2-d269d5c015a0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:00:03 crc kubenswrapper[4823]: I0121 18:00:03.740061 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-565fw\" (UniqueName: \"kubernetes.io/projected/0208cb78-6a49-4a77-bba2-d269d5c015a0-kube-api-access-565fw\") on node \"crc\" DevicePath \"\"" Jan 21 18:00:04 crc kubenswrapper[4823]: I0121 18:00:04.128551 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" event={"ID":"0208cb78-6a49-4a77-bba2-d269d5c015a0","Type":"ContainerDied","Data":"ce0d77014efca420d694b47402396463c492823e6dd1c2d5523d8dc85229b9a2"} Jan 21 18:00:04 crc kubenswrapper[4823]: I0121 18:00:04.128587 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce0d77014efca420d694b47402396463c492823e6dd1c2d5523d8dc85229b9a2" Jan 21 18:00:04 crc kubenswrapper[4823]: I0121 18:00:04.128623 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h" Jan 21 18:00:04 crc kubenswrapper[4823]: I0121 18:00:04.531194 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx"] Jan 21 18:00:04 crc kubenswrapper[4823]: I0121 18:00:04.539406 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-ksvrx"] Jan 21 18:00:05 crc kubenswrapper[4823]: I0121 18:00:05.375749 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450e2dbe-320e-45fa-8122-26b905dfb601" path="/var/lib/kubelet/pods/450e2dbe-320e-45fa-8122-26b905dfb601/volumes" Jan 21 18:00:29 crc kubenswrapper[4823]: I0121 18:00:29.315017 4823 scope.go:117] "RemoveContainer" containerID="9d28086d8468045fb3b611d6356708273715a871ced198e81c48672383a1e4d7" Jan 21 18:00:45 crc kubenswrapper[4823]: I0121 18:00:45.071178 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:00:45 crc kubenswrapper[4823]: I0121 18:00:45.071836 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.160982 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483641-vd8l7"] Jan 21 18:01:00 crc kubenswrapper[4823]: E0121 18:01:00.162017 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0208cb78-6a49-4a77-bba2-d269d5c015a0" containerName="collect-profiles" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.162034 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0208cb78-6a49-4a77-bba2-d269d5c015a0" containerName="collect-profiles" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.162284 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0208cb78-6a49-4a77-bba2-d269d5c015a0" containerName="collect-profiles" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.163102 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.174567 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483641-vd8l7"] Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.363293 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-config-data\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.364177 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-fernet-keys\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.364576 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-kube-api-access-f7kpg\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.365274 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-combined-ca-bundle\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.467572 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-combined-ca-bundle\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.467958 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-config-data\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.468156 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-fernet-keys\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.468339 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-kube-api-access-f7kpg\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.474203 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-fernet-keys\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.474657 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-config-data\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.475590 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-combined-ca-bundle\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.486461 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-kube-api-access-f7kpg\") pod \"keystone-cron-29483641-vd8l7\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:00 crc kubenswrapper[4823]: I0121 18:01:00.496925 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:01 crc kubenswrapper[4823]: I0121 18:01:01.034840 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483641-vd8l7"] Jan 21 18:01:01 crc kubenswrapper[4823]: I0121 18:01:01.724047 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483641-vd8l7" event={"ID":"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb","Type":"ContainerStarted","Data":"d6ff77c7d4f12903fa756e0f61182a6f0de9717eb7247751392da01cea435d7a"} Jan 21 18:01:01 crc kubenswrapper[4823]: I0121 18:01:01.724530 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483641-vd8l7" event={"ID":"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb","Type":"ContainerStarted","Data":"8c3d2fb014e96e829f99897a3fa04efdc3e4eacbb89dce94bf9cdba3d38f0645"} Jan 21 18:01:01 crc kubenswrapper[4823]: I0121 18:01:01.751929 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483641-vd8l7" podStartSLOduration=1.751906881 podStartE2EDuration="1.751906881s" podCreationTimestamp="2026-01-21 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:01:01.744785295 +0000 UTC m=+2662.670916155" watchObservedRunningTime="2026-01-21 18:01:01.751906881 +0000 UTC m=+2662.678037761" Jan 21 18:01:03 crc kubenswrapper[4823]: I0121 18:01:03.742931 4823 generic.go:334] "Generic (PLEG): container finished" podID="832c4531-1b16-4ef3-b1b4-cb89dbe48cfb" containerID="d6ff77c7d4f12903fa756e0f61182a6f0de9717eb7247751392da01cea435d7a" exitCode=0 Jan 21 18:01:03 crc kubenswrapper[4823]: I0121 18:01:03.743025 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483641-vd8l7" event={"ID":"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb","Type":"ContainerDied","Data":"d6ff77c7d4f12903fa756e0f61182a6f0de9717eb7247751392da01cea435d7a"} Jan 21 18:01:05 crc kubenswrapper[4823]: 
I0121 18:01:05.138063 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.169795 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-config-data\") pod \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.169890 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-fernet-keys\") pod \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.170118 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-kube-api-access-f7kpg\") pod \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.170211 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-combined-ca-bundle\") pod \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\" (UID: \"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb\") " Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.184089 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "832c4531-1b16-4ef3-b1b4-cb89dbe48cfb" (UID: "832c4531-1b16-4ef3-b1b4-cb89dbe48cfb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.184649 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-kube-api-access-f7kpg" (OuterVolumeSpecName: "kube-api-access-f7kpg") pod "832c4531-1b16-4ef3-b1b4-cb89dbe48cfb" (UID: "832c4531-1b16-4ef3-b1b4-cb89dbe48cfb"). InnerVolumeSpecName "kube-api-access-f7kpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.201357 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "832c4531-1b16-4ef3-b1b4-cb89dbe48cfb" (UID: "832c4531-1b16-4ef3-b1b4-cb89dbe48cfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.236708 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-config-data" (OuterVolumeSpecName: "config-data") pod "832c4531-1b16-4ef3-b1b4-cb89dbe48cfb" (UID: "832c4531-1b16-4ef3-b1b4-cb89dbe48cfb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.272613 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-kube-api-access-f7kpg\") on node \"crc\" DevicePath \"\"" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.272656 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.272668 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.272679 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/832c4531-1b16-4ef3-b1b4-cb89dbe48cfb-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.762559 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483641-vd8l7" event={"ID":"832c4531-1b16-4ef3-b1b4-cb89dbe48cfb","Type":"ContainerDied","Data":"8c3d2fb014e96e829f99897a3fa04efdc3e4eacbb89dce94bf9cdba3d38f0645"} Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.762597 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c3d2fb014e96e829f99897a3fa04efdc3e4eacbb89dce94bf9cdba3d38f0645" Jan 21 18:01:05 crc kubenswrapper[4823]: I0121 18:01:05.762905 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483641-vd8l7" Jan 21 18:01:15 crc kubenswrapper[4823]: I0121 18:01:15.070349 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:01:15 crc kubenswrapper[4823]: I0121 18:01:15.070841 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:01:35 crc kubenswrapper[4823]: I0121 18:01:35.759661 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-76d5f7bd8c-dgmn6" podUID="6b9e2d6c-5e93-426c-8c47-478d9ad360ed" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 21 18:01:45 crc kubenswrapper[4823]: I0121 18:01:45.070740 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:01:45 crc kubenswrapper[4823]: I0121 18:01:45.073549 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:01:45 crc kubenswrapper[4823]: I0121 18:01:45.073617 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 18:01:45 crc kubenswrapper[4823]: I0121 18:01:45.074390 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"981166277ef96120a45297ce5b5c3d23d592c823e623793778dddeac0620c8d0"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:01:45 crc kubenswrapper[4823]: I0121 18:01:45.074445 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://981166277ef96120a45297ce5b5c3d23d592c823e623793778dddeac0620c8d0" gracePeriod=600 Jan 21 18:01:46 crc kubenswrapper[4823]: I0121 18:01:46.157211 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="981166277ef96120a45297ce5b5c3d23d592c823e623793778dddeac0620c8d0" exitCode=0 Jan 21 18:01:46 crc kubenswrapper[4823]: I0121 18:01:46.157307 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"981166277ef96120a45297ce5b5c3d23d592c823e623793778dddeac0620c8d0"} Jan 21 18:01:46 crc kubenswrapper[4823]: I0121 18:01:46.157815 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48"} Jan 21 18:01:46 crc kubenswrapper[4823]: I0121 18:01:46.157841 4823 scope.go:117] "RemoveContainer" containerID="84644d3aee20b3068c1f8d32703dde4c17a6f2665fb613cad7e1b6e981f46bce" Jan 21 18:02:33 crc kubenswrapper[4823]: I0121 18:02:33.588548 4823 generic.go:334] "Generic (PLEG): container finished" podID="543a718f-39d9-4c35-bd9e-888f739b9726" containerID="ad1e5448afd412ca5aa6f82a54f6431dad6957e97e309d313e55dd7f06961819" exitCode=0 Jan 21 18:02:33 crc kubenswrapper[4823]: I0121 18:02:33.588793 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" event={"ID":"543a718f-39d9-4c35-bd9e-888f739b9726","Type":"ContainerDied","Data":"ad1e5448afd412ca5aa6f82a54f6431dad6957e97e309d313e55dd7f06961819"} Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.146482 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.157753 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-secret-0\") pod \"543a718f-39d9-4c35-bd9e-888f739b9726\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.157826 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-combined-ca-bundle\") pod \"543a718f-39d9-4c35-bd9e-888f739b9726\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.157928 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-inventory\") pod \"543a718f-39d9-4c35-bd9e-888f739b9726\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.157996 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdbxf\" (UniqueName: \"kubernetes.io/projected/543a718f-39d9-4c35-bd9e-888f739b9726-kube-api-access-tdbxf\") pod \"543a718f-39d9-4c35-bd9e-888f739b9726\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.158032 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-ssh-key-openstack-edpm-ipam\") pod \"543a718f-39d9-4c35-bd9e-888f739b9726\" (UID: \"543a718f-39d9-4c35-bd9e-888f739b9726\") " Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.168178 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543a718f-39d9-4c35-bd9e-888f739b9726-kube-api-access-tdbxf" (OuterVolumeSpecName: "kube-api-access-tdbxf") pod "543a718f-39d9-4c35-bd9e-888f739b9726" (UID: "543a718f-39d9-4c35-bd9e-888f739b9726"). InnerVolumeSpecName "kube-api-access-tdbxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.180904 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "543a718f-39d9-4c35-bd9e-888f739b9726" (UID: "543a718f-39d9-4c35-bd9e-888f739b9726"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.194063 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-inventory" (OuterVolumeSpecName: "inventory") pod "543a718f-39d9-4c35-bd9e-888f739b9726" (UID: "543a718f-39d9-4c35-bd9e-888f739b9726"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.214123 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "543a718f-39d9-4c35-bd9e-888f739b9726" (UID: "543a718f-39d9-4c35-bd9e-888f739b9726"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.222153 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "543a718f-39d9-4c35-bd9e-888f739b9726" (UID: "543a718f-39d9-4c35-bd9e-888f739b9726"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.260149 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.260188 4823 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.260199 4823 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.260208 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/543a718f-39d9-4c35-bd9e-888f739b9726-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.260218 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdbxf\" (UniqueName: \"kubernetes.io/projected/543a718f-39d9-4c35-bd9e-888f739b9726-kube-api-access-tdbxf\") on node \"crc\" DevicePath \"\"" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.614259 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" event={"ID":"543a718f-39d9-4c35-bd9e-888f739b9726","Type":"ContainerDied","Data":"767a6eaca3e6b162e73042a089b644eedbe097a8e77fab44ce1b53b72928c83a"} Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.614310 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xbt96" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.614314 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="767a6eaca3e6b162e73042a089b644eedbe097a8e77fab44ce1b53b72928c83a" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.717288 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc"] Jan 21 18:02:35 crc kubenswrapper[4823]: E0121 18:02:35.717838 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832c4531-1b16-4ef3-b1b4-cb89dbe48cfb" containerName="keystone-cron" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.720100 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="832c4531-1b16-4ef3-b1b4-cb89dbe48cfb" containerName="keystone-cron" Jan 21 18:02:35 crc kubenswrapper[4823]: E0121 18:02:35.720157 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543a718f-39d9-4c35-bd9e-888f739b9726" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.720167 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="543a718f-39d9-4c35-bd9e-888f739b9726" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.720593 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="832c4531-1b16-4ef3-b1b4-cb89dbe48cfb" containerName="keystone-cron" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.720627 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="543a718f-39d9-4c35-bd9e-888f739b9726" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.721397 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.724381 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.724826 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.724893 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.725134 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.725328 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.725979 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.729419 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.750430 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc"] Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.874909 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.875689 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.875732 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.875771 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.876371 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.876573 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ff90f069-d94f-4af4-958c-4e43099fe702-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.876721 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.876771 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.876824 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlvdz\" (UniqueName: \"kubernetes.io/projected/ff90f069-d94f-4af4-958c-4e43099fe702-kube-api-access-nlvdz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.980094 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.980166 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ff90f069-d94f-4af4-958c-4e43099fe702-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.980207 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.980240 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.980284 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlvdz\" (UniqueName: \"kubernetes.io/projected/ff90f069-d94f-4af4-958c-4e43099fe702-kube-api-access-nlvdz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.980394 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.980448 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.980477 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.980512 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.981388 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ff90f069-d94f-4af4-958c-4e43099fe702-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.987902 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.988151 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.988517 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.988583 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.989586 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.990020 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.994419 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:35 crc kubenswrapper[4823]: I0121 18:02:35.999168 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlvdz\" (UniqueName: \"kubernetes.io/projected/ff90f069-d94f-4af4-958c-4e43099fe702-kube-api-access-nlvdz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sfkrc\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:36 crc kubenswrapper[4823]: I0121 18:02:36.054171 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:02:36 crc kubenswrapper[4823]: I0121 18:02:36.651233 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc"] Jan 21 18:02:36 crc kubenswrapper[4823]: I0121 18:02:36.662872 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:02:37 crc kubenswrapper[4823]: I0121 18:02:37.640352 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" event={"ID":"ff90f069-d94f-4af4-958c-4e43099fe702","Type":"ContainerStarted","Data":"e1acae069ad296a76eacb837a2a28623bcc3f07ba4c52732000aa93fcfcb3258"} Jan 21 18:02:38 crc kubenswrapper[4823]: I0121 18:02:38.657281 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" event={"ID":"ff90f069-d94f-4af4-958c-4e43099fe702","Type":"ContainerStarted","Data":"c1a59e24c9d160a0192824fa48944d7954ab46d1507ff2674f135923b81eb4fb"} Jan 21 18:02:38 crc kubenswrapper[4823]: I0121 18:02:38.681268 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" podStartSLOduration=3.182404135 podStartE2EDuration="3.681252007s" podCreationTimestamp="2026-01-21 18:02:35 +0000 UTC" firstStartedPulling="2026-01-21 18:02:36.662539598 +0000 UTC m=+2757.588670458" lastFinishedPulling="2026-01-21 18:02:37.16138746 +0000 UTC m=+2758.087518330" observedRunningTime="2026-01-21 18:02:38.678318864 +0000 UTC m=+2759.604449724" watchObservedRunningTime="2026-01-21 18:02:38.681252007 +0000 UTC m=+2759.607382857" Jan 21 18:04:15 crc kubenswrapper[4823]: I0121 18:04:15.101062 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:04:15 crc kubenswrapper[4823]: I0121 18:04:15.101937 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.408008 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lr55l"] Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.410374 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.429420 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lr55l"] Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.472352 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-utilities\") pod \"community-operators-lr55l\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.472461 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95t4n\" (UniqueName: \"kubernetes.io/projected/bed3d62b-382a-44f1-9648-e8c984d03b5a-kube-api-access-95t4n\") pod \"community-operators-lr55l\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.472496 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-catalog-content\") pod \"community-operators-lr55l\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.574872 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-utilities\") pod \"community-operators-lr55l\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.575195 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95t4n\" (UniqueName: \"kubernetes.io/projected/bed3d62b-382a-44f1-9648-e8c984d03b5a-kube-api-access-95t4n\") pod \"community-operators-lr55l\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.575306 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-catalog-content\") pod \"community-operators-lr55l\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.575444 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-utilities\") pod \"community-operators-lr55l\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.575669 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-catalog-content\") pod \"community-operators-lr55l\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.595454 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-95t4n\" (UniqueName: \"kubernetes.io/projected/bed3d62b-382a-44f1-9648-e8c984d03b5a-kube-api-access-95t4n\") pod \"community-operators-lr55l\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:26 crc kubenswrapper[4823]: I0121 18:04:26.733392 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:27 crc kubenswrapper[4823]: I0121 18:04:27.278233 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lr55l"] Jan 21 18:04:27 crc kubenswrapper[4823]: I0121 18:04:27.847323 4823 generic.go:334] "Generic (PLEG): container finished" podID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerID="11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442" exitCode=0 Jan 21 18:04:27 crc kubenswrapper[4823]: I0121 18:04:27.847403 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lr55l" event={"ID":"bed3d62b-382a-44f1-9648-e8c984d03b5a","Type":"ContainerDied","Data":"11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442"} Jan 21 18:04:27 crc kubenswrapper[4823]: I0121 18:04:27.847672 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lr55l" event={"ID":"bed3d62b-382a-44f1-9648-e8c984d03b5a","Type":"ContainerStarted","Data":"f5c5d1cdc385da8e7c13332abafb850f4d130a69c97f5a6df13dac73c0262438"} Jan 21 18:04:28 crc kubenswrapper[4823]: I0121 18:04:28.859544 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lr55l" event={"ID":"bed3d62b-382a-44f1-9648-e8c984d03b5a","Type":"ContainerStarted","Data":"060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc"} Jan 21 18:04:29 crc kubenswrapper[4823]: I0121 18:04:29.875078 4823 generic.go:334] "Generic (PLEG): container finished" podID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerID="060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc" exitCode=0 Jan 21 18:04:29 crc kubenswrapper[4823]: I0121 18:04:29.875141 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lr55l" event={"ID":"bed3d62b-382a-44f1-9648-e8c984d03b5a","Type":"ContainerDied","Data":"060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc"} Jan 21 18:04:30 crc kubenswrapper[4823]: I0121 18:04:30.887341 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lr55l" event={"ID":"bed3d62b-382a-44f1-9648-e8c984d03b5a","Type":"ContainerStarted","Data":"8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9"} Jan 21 18:04:30 crc kubenswrapper[4823]: I0121 18:04:30.914799 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lr55l" podStartSLOduration=2.2735127090000002 podStartE2EDuration="4.914772881s" podCreationTimestamp="2026-01-21 18:04:26 +0000 UTC" firstStartedPulling="2026-01-21 18:04:27.849033544 +0000 UTC m=+2868.775164404" lastFinishedPulling="2026-01-21 18:04:30.490293716 +0000 UTC m=+2871.416424576" observedRunningTime="2026-01-21 18:04:30.907250565 +0000 UTC m=+2871.833381445" watchObservedRunningTime="2026-01-21 18:04:30.914772881 +0000 UTC m=+2871.840903751" Jan 21 18:04:36 crc kubenswrapper[4823]: I0121 18:04:36.733909 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:36 crc kubenswrapper[4823]: I0121 18:04:36.734482 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:36 crc kubenswrapper[4823]: I0121 18:04:36.795547 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:36 crc kubenswrapper[4823]: I0121 18:04:36.990395 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:37 crc kubenswrapper[4823]: I0121 18:04:37.044462 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lr55l"] Jan 21 18:04:38 crc kubenswrapper[4823]: I0121 18:04:38.960170 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lr55l" podUID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerName="registry-server" containerID="cri-o://8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9" gracePeriod=2 Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.414040 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.446948 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95t4n\" (UniqueName: \"kubernetes.io/projected/bed3d62b-382a-44f1-9648-e8c984d03b5a-kube-api-access-95t4n\") pod \"bed3d62b-382a-44f1-9648-e8c984d03b5a\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.447364 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-utilities\") pod \"bed3d62b-382a-44f1-9648-e8c984d03b5a\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.447492 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-catalog-content\") pod \"bed3d62b-382a-44f1-9648-e8c984d03b5a\" (UID: \"bed3d62b-382a-44f1-9648-e8c984d03b5a\") " Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.458985 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed3d62b-382a-44f1-9648-e8c984d03b5a-kube-api-access-95t4n" (OuterVolumeSpecName: "kube-api-access-95t4n") pod "bed3d62b-382a-44f1-9648-e8c984d03b5a" (UID: "bed3d62b-382a-44f1-9648-e8c984d03b5a"). InnerVolumeSpecName "kube-api-access-95t4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.470826 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-utilities" (OuterVolumeSpecName: "utilities") pod "bed3d62b-382a-44f1-9648-e8c984d03b5a" (UID: "bed3d62b-382a-44f1-9648-e8c984d03b5a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.521575 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bed3d62b-382a-44f1-9648-e8c984d03b5a" (UID: "bed3d62b-382a-44f1-9648-e8c984d03b5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.549417 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.549457 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bed3d62b-382a-44f1-9648-e8c984d03b5a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.549491 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95t4n\" (UniqueName: \"kubernetes.io/projected/bed3d62b-382a-44f1-9648-e8c984d03b5a-kube-api-access-95t4n\") on node \"crc\" DevicePath \"\"" Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.968787 4823 generic.go:334] "Generic (PLEG): container finished" podID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerID="8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9" exitCode=0 Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.968830 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lr55l" event={"ID":"bed3d62b-382a-44f1-9648-e8c984d03b5a","Type":"ContainerDied","Data":"8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9"} Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.968868 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lr55l" event={"ID":"bed3d62b-382a-44f1-9648-e8c984d03b5a","Type":"ContainerDied","Data":"f5c5d1cdc385da8e7c13332abafb850f4d130a69c97f5a6df13dac73c0262438"} Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.968884 4823 scope.go:117] "RemoveContainer" containerID="8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9" Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.969004 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lr55l" Jan 21 18:04:39 crc kubenswrapper[4823]: I0121 18:04:39.998593 4823 scope.go:117] "RemoveContainer" containerID="060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc" Jan 21 18:04:40 crc kubenswrapper[4823]: I0121 18:04:40.016075 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lr55l"] Jan 21 18:04:40 crc kubenswrapper[4823]: I0121 18:04:40.025998 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lr55l"] Jan 21 18:04:40 crc kubenswrapper[4823]: I0121 18:04:40.028514 4823 scope.go:117] "RemoveContainer" containerID="11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442" Jan 21 18:04:40 crc kubenswrapper[4823]: I0121 18:04:40.084047 4823 scope.go:117] "RemoveContainer" containerID="8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9" Jan 21 18:04:40 crc kubenswrapper[4823]: E0121 18:04:40.085280 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9\": container with ID starting with 8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9 not found: ID does not exist" containerID="8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9" Jan 21 18:04:40 crc kubenswrapper[4823]: I0121 18:04:40.085324 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9"} err="failed to get container status \"8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9\": rpc error: code = NotFound desc = could not find container \"8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9\": container with ID starting with 8c5ed30fcfa2937c9fbd024405a4537f99f2b66c47a2a224953ab133a20434a9 not found: ID does not exist" Jan 21 18:04:40 crc kubenswrapper[4823]: I0121 18:04:40.085343 4823 scope.go:117] "RemoveContainer" containerID="060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc" Jan 21 18:04:40 crc kubenswrapper[4823]: E0121 18:04:40.085653 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc\": container with ID starting with 060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc not found: ID does not exist" containerID="060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc" Jan 21 18:04:40 crc kubenswrapper[4823]: I0121 18:04:40.085678 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc"} err="failed to get container status \"060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc\": rpc error: code = NotFound desc = could not find container \"060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc\": container with ID starting with 060638e0f085614c7413bf76965e7d08e33587d5840cc203b71cd5495121c9bc not found: ID does not exist" Jan 21 18:04:40 crc kubenswrapper[4823]: I0121 18:04:40.085689 4823 scope.go:117] "RemoveContainer" containerID="11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442" Jan 21 18:04:40 crc kubenswrapper[4823]: E0121 18:04:40.085910 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442\": container with ID starting with 11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442 not found: ID does not exist" containerID="11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442" Jan 21 18:04:40 crc kubenswrapper[4823]: I0121 18:04:40.085939 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442"} err="failed to get container status \"11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442\": rpc error: code = NotFound desc = could not find container \"11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442\": container with ID starting with 11d8bf3ab6cc08d2dddf982d167a7ff90b16c135217ae4b1611b894350b3b442 not found: ID does not exist" Jan 21 18:04:41 crc kubenswrapper[4823]: I0121 18:04:41.357445 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed3d62b-382a-44f1-9648-e8c984d03b5a" path="/var/lib/kubelet/pods/bed3d62b-382a-44f1-9648-e8c984d03b5a/volumes" Jan 21 18:04:45 crc kubenswrapper[4823]: I0121 18:04:45.070589 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:04:45 crc kubenswrapper[4823]: I0121 18:04:45.071014 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 18:04:47.786276 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s28bz"] Jan 21 18:04:47 crc kubenswrapper[4823]: E0121 18:04:47.787243 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerName="registry-server" Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 18:04:47.787259 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerName="registry-server" Jan 21 18:04:47 crc kubenswrapper[4823]: E0121 18:04:47.787287 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerName="extract-utilities" Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 18:04:47.787294 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerName="extract-utilities" Jan 21 18:04:47 crc kubenswrapper[4823]: E0121 18:04:47.787331 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerName="extract-content" Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 18:04:47.787339 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerName="extract-content" Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 18:04:47.787577 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed3d62b-382a-44f1-9648-e8c984d03b5a" containerName="registry-server" Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 
18:04:47.789246 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 18:04:47.802126 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s28bz"] Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 18:04:47.909601 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-utilities\") pod \"certified-operators-s28bz\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 18:04:47.909661 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-catalog-content\") pod \"certified-operators-s28bz\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:47 crc kubenswrapper[4823]: I0121 18:04:47.909695 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twqfs\" (UniqueName: \"kubernetes.io/projected/28a3febc-8c77-4a7b-b994-43866774ef8e-kube-api-access-twqfs\") pod \"certified-operators-s28bz\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:48 crc kubenswrapper[4823]: I0121 18:04:48.011679 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-utilities\") pod \"certified-operators-s28bz\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:48 crc kubenswrapper[4823]: I0121 18:04:48.011745 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-catalog-content\") pod \"certified-operators-s28bz\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:48 crc kubenswrapper[4823]: I0121 18:04:48.011777 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twqfs\" (UniqueName: \"kubernetes.io/projected/28a3febc-8c77-4a7b-b994-43866774ef8e-kube-api-access-twqfs\") pod \"certified-operators-s28bz\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:48 crc kubenswrapper[4823]: I0121 18:04:48.012250 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-catalog-content\") pod \"certified-operators-s28bz\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:48 crc kubenswrapper[4823]: I0121 18:04:48.012359 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-utilities\") pod \"certified-operators-s28bz\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:48 crc 
kubenswrapper[4823]: I0121 18:04:48.036750 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twqfs\" (UniqueName: \"kubernetes.io/projected/28a3febc-8c77-4a7b-b994-43866774ef8e-kube-api-access-twqfs\") pod \"certified-operators-s28bz\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:48 crc kubenswrapper[4823]: I0121 18:04:48.116675 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:48 crc kubenswrapper[4823]: I0121 18:04:48.604445 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s28bz"] Jan 21 18:04:49 crc kubenswrapper[4823]: I0121 18:04:49.062965 4823 generic.go:334] "Generic (PLEG): container finished" podID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerID="4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9" exitCode=0 Jan 21 18:04:49 crc kubenswrapper[4823]: I0121 18:04:49.063230 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s28bz" event={"ID":"28a3febc-8c77-4a7b-b994-43866774ef8e","Type":"ContainerDied","Data":"4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9"} Jan 21 18:04:49 crc kubenswrapper[4823]: I0121 18:04:49.064690 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s28bz" event={"ID":"28a3febc-8c77-4a7b-b994-43866774ef8e","Type":"ContainerStarted","Data":"623a091ca0eee554c1a704e97e607ac527ceb797f1bb1f348db290e35e30b4dd"} Jan 21 18:04:50 crc kubenswrapper[4823]: I0121 18:04:50.077655 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s28bz" event={"ID":"28a3febc-8c77-4a7b-b994-43866774ef8e","Type":"ContainerStarted","Data":"257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768"} Jan 21 18:04:51 crc kubenswrapper[4823]: I0121 18:04:51.115838 4823 generic.go:334] "Generic (PLEG): container finished" podID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerID="257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768" exitCode=0 Jan 21 18:04:51 crc kubenswrapper[4823]: I0121 18:04:51.116263 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s28bz" event={"ID":"28a3febc-8c77-4a7b-b994-43866774ef8e","Type":"ContainerDied","Data":"257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768"} Jan 21 18:04:52 crc kubenswrapper[4823]: I0121 18:04:52.126708 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s28bz" event={"ID":"28a3febc-8c77-4a7b-b994-43866774ef8e","Type":"ContainerStarted","Data":"2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241"} Jan 21 18:04:52 crc kubenswrapper[4823]: I0121 18:04:52.158975 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s28bz" podStartSLOduration=2.623041309 podStartE2EDuration="5.158951095s" podCreationTimestamp="2026-01-21 18:04:47 +0000 UTC" firstStartedPulling="2026-01-21 18:04:49.067497614 +0000 UTC m=+2889.993628484" lastFinishedPulling="2026-01-21 18:04:51.60340741 +0000 UTC m=+2892.529538270" observedRunningTime="2026-01-21 18:04:52.155027857 +0000 UTC m=+2893.081158737" watchObservedRunningTime="2026-01-21 18:04:52.158951095 +0000 UTC m=+2893.085081965" Jan 21 18:04:58 crc kubenswrapper[4823]: I0121 
18:04:58.271997 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:58 crc kubenswrapper[4823]: I0121 18:04:58.272562 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:58 crc kubenswrapper[4823]: I0121 18:04:58.364374 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:59 crc kubenswrapper[4823]: I0121 18:04:59.362302 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:04:59 crc kubenswrapper[4823]: I0121 18:04:59.420243 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s28bz"] Jan 21 18:05:00 crc kubenswrapper[4823]: I0121 18:05:00.305981 4823 generic.go:334] "Generic (PLEG): container finished" podID="ff90f069-d94f-4af4-958c-4e43099fe702" containerID="c1a59e24c9d160a0192824fa48944d7954ab46d1507ff2674f135923b81eb4fb" exitCode=0 Jan 21 18:05:00 crc kubenswrapper[4823]: I0121 18:05:00.306932 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" event={"ID":"ff90f069-d94f-4af4-958c-4e43099fe702","Type":"ContainerDied","Data":"c1a59e24c9d160a0192824fa48944d7954ab46d1507ff2674f135923b81eb4fb"} Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.314478 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s28bz" podUID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerName="registry-server" containerID="cri-o://2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241" gracePeriod=2 Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.857189 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.866063 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.922278 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-inventory\") pod \"ff90f069-d94f-4af4-958c-4e43099fe702\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.922392 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-1\") pod \"ff90f069-d94f-4af4-958c-4e43099fe702\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.922441 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-0\") pod \"ff90f069-d94f-4af4-958c-4e43099fe702\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.922459 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlvdz\" (UniqueName: \"kubernetes.io/projected/ff90f069-d94f-4af4-958c-4e43099fe702-kube-api-access-nlvdz\") pod \"ff90f069-d94f-4af4-958c-4e43099fe702\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.922497 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-ssh-key-openstack-edpm-ipam\") pod \"ff90f069-d94f-4af4-958c-4e43099fe702\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.922523 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-0\") pod \"ff90f069-d94f-4af4-958c-4e43099fe702\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.922582 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-1\") pod \"ff90f069-d94f-4af4-958c-4e43099fe702\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.922633 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-combined-ca-bundle\") pod \"ff90f069-d94f-4af4-958c-4e43099fe702\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.922730 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ff90f069-d94f-4af4-958c-4e43099fe702-nova-extra-config-0\") pod \"ff90f069-d94f-4af4-958c-4e43099fe702\" (UID: \"ff90f069-d94f-4af4-958c-4e43099fe702\") " Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.953193 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ff90f069-d94f-4af4-958c-4e43099fe702-kube-api-access-nlvdz" (OuterVolumeSpecName: "kube-api-access-nlvdz") pod "ff90f069-d94f-4af4-958c-4e43099fe702" (UID: "ff90f069-d94f-4af4-958c-4e43099fe702"). InnerVolumeSpecName "kube-api-access-nlvdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.958841 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ff90f069-d94f-4af4-958c-4e43099fe702" (UID: "ff90f069-d94f-4af4-958c-4e43099fe702"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.962875 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff90f069-d94f-4af4-958c-4e43099fe702" (UID: "ff90f069-d94f-4af4-958c-4e43099fe702"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.968408 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ff90f069-d94f-4af4-958c-4e43099fe702" (UID: "ff90f069-d94f-4af4-958c-4e43099fe702"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.972065 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ff90f069-d94f-4af4-958c-4e43099fe702" (UID: "ff90f069-d94f-4af4-958c-4e43099fe702"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.983018 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff90f069-d94f-4af4-958c-4e43099fe702-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "ff90f069-d94f-4af4-958c-4e43099fe702" (UID: "ff90f069-d94f-4af4-958c-4e43099fe702"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.986601 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ff90f069-d94f-4af4-958c-4e43099fe702" (UID: "ff90f069-d94f-4af4-958c-4e43099fe702"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.988293 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-inventory" (OuterVolumeSpecName: "inventory") pod "ff90f069-d94f-4af4-958c-4e43099fe702" (UID: "ff90f069-d94f-4af4-958c-4e43099fe702"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:05:01 crc kubenswrapper[4823]: I0121 18:05:01.991126 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ff90f069-d94f-4af4-958c-4e43099fe702" (UID: "ff90f069-d94f-4af4-958c-4e43099fe702"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.024431 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-utilities\") pod \"28a3febc-8c77-4a7b-b994-43866774ef8e\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.024476 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-catalog-content\") pod \"28a3febc-8c77-4a7b-b994-43866774ef8e\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.024702 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twqfs\" (UniqueName: \"kubernetes.io/projected/28a3febc-8c77-4a7b-b994-43866774ef8e-kube-api-access-twqfs\") pod \"28a3febc-8c77-4a7b-b994-43866774ef8e\" (UID: \"28a3febc-8c77-4a7b-b994-43866774ef8e\") " Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025111 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025130 4823 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025139 4823 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025148 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlvdz\" (UniqueName: \"kubernetes.io/projected/ff90f069-d94f-4af4-958c-4e43099fe702-kube-api-access-nlvdz\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025159 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025168 4823 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025175 4823 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-cell1-compute-config-1\") on node \"crc\" 
DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025183 4823 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff90f069-d94f-4af4-958c-4e43099fe702-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025191 4823 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ff90f069-d94f-4af4-958c-4e43099fe702-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.025374 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-utilities" (OuterVolumeSpecName: "utilities") pod "28a3febc-8c77-4a7b-b994-43866774ef8e" (UID: "28a3febc-8c77-4a7b-b994-43866774ef8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.028602 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a3febc-8c77-4a7b-b994-43866774ef8e-kube-api-access-twqfs" (OuterVolumeSpecName: "kube-api-access-twqfs") pod "28a3febc-8c77-4a7b-b994-43866774ef8e" (UID: "28a3febc-8c77-4a7b-b994-43866774ef8e"). InnerVolumeSpecName "kube-api-access-twqfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.082366 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28a3febc-8c77-4a7b-b994-43866774ef8e" (UID: "28a3febc-8c77-4a7b-b994-43866774ef8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.127772 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twqfs\" (UniqueName: \"kubernetes.io/projected/28a3febc-8c77-4a7b-b994-43866774ef8e-kube-api-access-twqfs\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.127808 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.127823 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a3febc-8c77-4a7b-b994-43866774ef8e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.328006 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" event={"ID":"ff90f069-d94f-4af4-958c-4e43099fe702","Type":"ContainerDied","Data":"e1acae069ad296a76eacb837a2a28623bcc3f07ba4c52732000aa93fcfcb3258"} Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.328089 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1acae069ad296a76eacb837a2a28623bcc3f07ba4c52732000aa93fcfcb3258" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.328119 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sfkrc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.332035 4823 generic.go:334] "Generic (PLEG): container finished" podID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerID="2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241" exitCode=0 Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.332099 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s28bz" event={"ID":"28a3febc-8c77-4a7b-b994-43866774ef8e","Type":"ContainerDied","Data":"2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241"} Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.332143 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s28bz" event={"ID":"28a3febc-8c77-4a7b-b994-43866774ef8e","Type":"ContainerDied","Data":"623a091ca0eee554c1a704e97e607ac527ceb797f1bb1f348db290e35e30b4dd"} Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.332162 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s28bz" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.332172 4823 scope.go:117] "RemoveContainer" containerID="2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.375895 4823 scope.go:117] "RemoveContainer" containerID="257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.402118 4823 scope.go:117] "RemoveContainer" containerID="4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.405624 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s28bz"] Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.418272 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s28bz"] Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.460339 4823 scope.go:117] "RemoveContainer" containerID="2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.461174 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc"] Jan 21 18:05:02 crc kubenswrapper[4823]: E0121 18:05:02.461750 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerName="registry-server" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.461777 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerName="registry-server" Jan 21 18:05:02 crc kubenswrapper[4823]: E0121 18:05:02.461810 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerName="extract-content" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.461819 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerName="extract-content" Jan 21 18:05:02 crc kubenswrapper[4823]: E0121 18:05:02.461832 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerName="extract-utilities" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.461840 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerName="extract-utilities" Jan 21 18:05:02 crc kubenswrapper[4823]: E0121 18:05:02.461892 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff90f069-d94f-4af4-958c-4e43099fe702" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.461903 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff90f069-d94f-4af4-958c-4e43099fe702" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.462140 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a3febc-8c77-4a7b-b994-43866774ef8e" containerName="registry-server" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.462164 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff90f069-d94f-4af4-958c-4e43099fe702" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.463088 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: E0121 18:05:02.465425 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241\": container with ID starting with 2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241 not found: ID does not exist" containerID="2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.465476 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241"} err="failed to get container status \"2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241\": rpc error: code = NotFound desc = could not find container \"2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241\": container with ID starting with 2f2fa875ed80c66f1c9e12ce6e32b75ac3f3da670bd2ccded43496b1700e8241 not found: ID does not exist" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.465506 4823 scope.go:117] "RemoveContainer" containerID="257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.465698 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 21 18:05:02 crc kubenswrapper[4823]: E0121 18:05:02.466799 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768\": container with ID starting with 257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768 not found: ID does not exist" containerID="257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.467308 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768"} err="failed to get container status \"257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768\": rpc error: code = NotFound desc = could not find container \"257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768\": container with ID starting with 
257aeedd4d033ddfbfe04378e7079e28e57627a13e970c2898e0e88fcede4768 not found: ID does not exist" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.467331 4823 scope.go:117] "RemoveContainer" containerID="4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.466995 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.467063 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkxhd" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.467139 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.467263 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 18:05:02 crc kubenswrapper[4823]: E0121 18:05:02.471209 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9\": container with ID starting with 4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9 not found: ID does not exist" containerID="4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.471245 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9"} err="failed to get container status \"4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9\": rpc error: code = NotFound desc = could not find container \"4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9\": container with ID starting with 4cd38d694853217ec9ba66cce3969c0c032dd3bab344b67f831bc30b2641d1e9 not found: ID does not exist" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.475202 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc"] Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.539724 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.539981 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.540164 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: 
\"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.540250 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.540322 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvtkq\" (UniqueName: \"kubernetes.io/projected/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-kube-api-access-jvtkq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.540426 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.540557 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.642514 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.642961 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.643121 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.643360 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.643514 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.643650 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvtkq\" (UniqueName: \"kubernetes.io/projected/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-kube-api-access-jvtkq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.643792 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.648069 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.648111 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.648888 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.649380 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.649821 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.655374 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.662623 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvtkq\" (UniqueName: \"kubernetes.io/projected/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-kube-api-access-jvtkq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:02 crc kubenswrapper[4823]: I0121 18:05:02.817182 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:05:03 crc kubenswrapper[4823]: I0121 18:05:03.360150 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a3febc-8c77-4a7b-b994-43866774ef8e" path="/var/lib/kubelet/pods/28a3febc-8c77-4a7b-b994-43866774ef8e/volumes" Jan 21 18:05:03 crc kubenswrapper[4823]: I0121 18:05:03.406001 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc"] Jan 21 18:05:03 crc kubenswrapper[4823]: W0121 18:05:03.411073 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f6be0ba_bfb2_4685_a2b4_24f4f8fe79cf.slice/crio-c570c87f48e93881165fb8f7f54688af15cc9b775819aee006d8ea11c224386b WatchSource:0}: Error finding container c570c87f48e93881165fb8f7f54688af15cc9b775819aee006d8ea11c224386b: Status 404 returned error can't find the container with id c570c87f48e93881165fb8f7f54688af15cc9b775819aee006d8ea11c224386b Jan 21 18:05:04 crc kubenswrapper[4823]: I0121 18:05:04.364434 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" event={"ID":"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf","Type":"ContainerStarted","Data":"814968f511a3469c589fe02e8ddfd8749db8d399aa8bdca1fa83a8de8db64bf2"} Jan 21 18:05:04 crc kubenswrapper[4823]: I0121 18:05:04.365227 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" event={"ID":"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf","Type":"ContainerStarted","Data":"c570c87f48e93881165fb8f7f54688af15cc9b775819aee006d8ea11c224386b"} Jan 21 18:05:04 crc kubenswrapper[4823]: I0121 18:05:04.397220 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" podStartSLOduration=1.847645906 podStartE2EDuration="2.397194742s" podCreationTimestamp="2026-01-21 18:05:02 +0000 UTC" firstStartedPulling="2026-01-21 18:05:03.413511513 +0000 UTC m=+2904.339642373" 
lastFinishedPulling="2026-01-21 18:05:03.963060349 +0000 UTC m=+2904.889191209" observedRunningTime="2026-01-21 18:05:04.385832681 +0000 UTC m=+2905.311963541" watchObservedRunningTime="2026-01-21 18:05:04.397194742 +0000 UTC m=+2905.323325602" Jan 21 18:05:15 crc kubenswrapper[4823]: I0121 18:05:15.070070 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:05:15 crc kubenswrapper[4823]: I0121 18:05:15.070636 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:05:15 crc kubenswrapper[4823]: I0121 18:05:15.070686 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 18:05:15 crc kubenswrapper[4823]: I0121 18:05:15.071478 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:05:15 crc kubenswrapper[4823]: I0121 18:05:15.071528 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" gracePeriod=600 Jan 21 18:05:15 crc kubenswrapper[4823]: E0121 18:05:15.203966 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:05:15 crc kubenswrapper[4823]: I0121 18:05:15.477976 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" exitCode=0 Jan 21 18:05:15 crc kubenswrapper[4823]: I0121 18:05:15.478032 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48"} Jan 21 18:05:15 crc kubenswrapper[4823]: I0121 18:05:15.478071 4823 scope.go:117] "RemoveContainer" containerID="981166277ef96120a45297ce5b5c3d23d592c823e623793778dddeac0620c8d0" Jan 21 18:05:15 crc kubenswrapper[4823]: I0121 18:05:15.479922 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:05:15 crc kubenswrapper[4823]: E0121 18:05:15.480568 4823 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:05:26 crc kubenswrapper[4823]: I0121 18:05:26.345250 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:05:26 crc kubenswrapper[4823]: E0121 18:05:26.346791 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:05:37 crc kubenswrapper[4823]: I0121 18:05:37.343492 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:05:37 crc kubenswrapper[4823]: E0121 18:05:37.344246 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:05:50 crc kubenswrapper[4823]: I0121 18:05:50.344019 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:05:50 crc kubenswrapper[4823]: E0121 18:05:50.344872 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:06:02 crc kubenswrapper[4823]: I0121 18:06:02.344031 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:06:02 crc kubenswrapper[4823]: E0121 18:06:02.344872 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:06:16 crc kubenswrapper[4823]: I0121 18:06:16.344260 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:06:16 crc kubenswrapper[4823]: E0121 18:06:16.345219 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:06:29 crc kubenswrapper[4823]: I0121 18:06:29.344149 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:06:29 crc kubenswrapper[4823]: E0121 18:06:29.345047 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:06:43 crc kubenswrapper[4823]: I0121 18:06:43.344422 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:06:43 crc kubenswrapper[4823]: E0121 18:06:43.345280 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:06:55 crc kubenswrapper[4823]: I0121 18:06:55.345125 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:06:55 crc kubenswrapper[4823]: E0121 18:06:55.346321 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:07:09 crc kubenswrapper[4823]: I0121 18:07:09.350825 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:07:09 crc kubenswrapper[4823]: E0121 18:07:09.351541 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:07:09 crc kubenswrapper[4823]: I0121 18:07:09.614771 4823 generic.go:334] "Generic (PLEG): container finished" podID="5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" containerID="814968f511a3469c589fe02e8ddfd8749db8d399aa8bdca1fa83a8de8db64bf2" exitCode=0 Jan 21 18:07:09 crc kubenswrapper[4823]: I0121 18:07:09.614827 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" event={"ID":"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf","Type":"ContainerDied","Data":"814968f511a3469c589fe02e8ddfd8749db8d399aa8bdca1fa83a8de8db64bf2"} Jan 21 18:07:11 crc 
kubenswrapper[4823]: I0121 18:07:11.078789 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.197612 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-0\") pod \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.197708 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-1\") pod \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.197816 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-telemetry-combined-ca-bundle\") pod \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.197876 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-inventory\") pod \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.197985 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-2\") pod \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.198013 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvtkq\" (UniqueName: \"kubernetes.io/projected/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-kube-api-access-jvtkq\") pod \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.198060 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ssh-key-openstack-edpm-ipam\") pod \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\" (UID: \"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf\") " Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.203611 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-kube-api-access-jvtkq" (OuterVolumeSpecName: "kube-api-access-jvtkq") pod "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" (UID: "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf"). InnerVolumeSpecName "kube-api-access-jvtkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.204988 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" (UID: "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.229834 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" (UID: "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.230808 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" (UID: "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.234078 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-inventory" (OuterVolumeSpecName: "inventory") pod "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" (UID: "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.234141 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" (UID: "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.241786 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" (UID: "5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.301532 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.301815 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvtkq\" (UniqueName: \"kubernetes.io/projected/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-kube-api-access-jvtkq\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.301932 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.302009 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.302106 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.302187 4823 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.302273 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.637784 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" event={"ID":"5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf","Type":"ContainerDied","Data":"c570c87f48e93881165fb8f7f54688af15cc9b775819aee006d8ea11c224386b"} Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.637828 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c570c87f48e93881165fb8f7f54688af15cc9b775819aee006d8ea11c224386b" Jan 21 18:07:11 crc kubenswrapper[4823]: I0121 18:07:11.637947 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.442816 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gkx8c"] Jan 21 18:07:16 crc kubenswrapper[4823]: E0121 18:07:16.443886 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.443903 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.444119 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.445516 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.475903 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gkx8c"] Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.522043 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-utilities\") pod \"redhat-operators-gkx8c\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.522241 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxk8w\" (UniqueName: \"kubernetes.io/projected/544bb921-7393-45c7-a245-396e846ae753-kube-api-access-sxk8w\") pod \"redhat-operators-gkx8c\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.522366 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-catalog-content\") pod \"redhat-operators-gkx8c\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.624604 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-utilities\") pod \"redhat-operators-gkx8c\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.624678 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxk8w\" (UniqueName: \"kubernetes.io/projected/544bb921-7393-45c7-a245-396e846ae753-kube-api-access-sxk8w\") pod \"redhat-operators-gkx8c\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.624731 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-catalog-content\") pod \"redhat-operators-gkx8c\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.625155 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-utilities\") pod \"redhat-operators-gkx8c\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.625244 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-catalog-content\") pod \"redhat-operators-gkx8c\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.643893 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxk8w\" (UniqueName: \"kubernetes.io/projected/544bb921-7393-45c7-a245-396e846ae753-kube-api-access-sxk8w\") pod \"redhat-operators-gkx8c\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:16 crc kubenswrapper[4823]: I0121 18:07:16.768191 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:17 crc kubenswrapper[4823]: I0121 18:07:17.284218 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gkx8c"] Jan 21 18:07:17 crc kubenswrapper[4823]: I0121 18:07:17.711264 4823 generic.go:334] "Generic (PLEG): container finished" podID="544bb921-7393-45c7-a245-396e846ae753" containerID="88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3" exitCode=0 Jan 21 18:07:17 crc kubenswrapper[4823]: I0121 18:07:17.711367 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkx8c" event={"ID":"544bb921-7393-45c7-a245-396e846ae753","Type":"ContainerDied","Data":"88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3"} Jan 21 18:07:17 crc kubenswrapper[4823]: I0121 18:07:17.711424 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkx8c" event={"ID":"544bb921-7393-45c7-a245-396e846ae753","Type":"ContainerStarted","Data":"5b2aaccb956d02a4fc2a141f1b671f83b9f6273bd1532b6766d31e43113d0497"} Jan 21 18:07:19 crc kubenswrapper[4823]: I0121 18:07:19.731425 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkx8c" event={"ID":"544bb921-7393-45c7-a245-396e846ae753","Type":"ContainerStarted","Data":"7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8"} Jan 21 18:07:21 crc kubenswrapper[4823]: I0121 18:07:21.769558 4823 generic.go:334] "Generic (PLEG): container finished" podID="544bb921-7393-45c7-a245-396e846ae753" containerID="7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8" exitCode=0 Jan 21 18:07:21 crc kubenswrapper[4823]: I0121 18:07:21.769676 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkx8c" event={"ID":"544bb921-7393-45c7-a245-396e846ae753","Type":"ContainerDied","Data":"7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8"} Jan 21 18:07:24 crc kubenswrapper[4823]: 
I0121 18:07:24.344477 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:07:24 crc kubenswrapper[4823]: E0121 18:07:24.345319 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:07:24 crc kubenswrapper[4823]: I0121 18:07:24.805884 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkx8c" event={"ID":"544bb921-7393-45c7-a245-396e846ae753","Type":"ContainerStarted","Data":"21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8"} Jan 21 18:07:24 crc kubenswrapper[4823]: I0121 18:07:24.832684 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gkx8c" podStartSLOduration=2.561922577 podStartE2EDuration="8.832648416s" podCreationTimestamp="2026-01-21 18:07:16 +0000 UTC" firstStartedPulling="2026-01-21 18:07:17.714912253 +0000 UTC m=+3038.641043113" lastFinishedPulling="2026-01-21 18:07:23.985638092 +0000 UTC m=+3044.911768952" observedRunningTime="2026-01-21 18:07:24.828599976 +0000 UTC m=+3045.754730876" watchObservedRunningTime="2026-01-21 18:07:24.832648416 +0000 UTC m=+3045.758779266" Jan 21 18:07:26 crc kubenswrapper[4823]: I0121 18:07:26.769127 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:26 crc kubenswrapper[4823]: I0121 18:07:26.769505 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:27 crc kubenswrapper[4823]: I0121 18:07:27.823467 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gkx8c" podUID="544bb921-7393-45c7-a245-396e846ae753" containerName="registry-server" probeResult="failure" output=< Jan 21 18:07:27 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 21 18:07:27 crc kubenswrapper[4823]: >
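
The startup-probe failure above is the catalog pod's health check timing out: the registry-server container serves its gRPC API on :50051, and the probe (marketplace catalog pods normally run a grpc_health_probe-style check, which is what produces the "timeout: failed to connect service" wording) could not reach the port within its 1-second budget while the freshly extracted catalog was still initializing; the same probe flips to "started" and the pod goes "ready" at 18:07:36 below. A rough stand-in for the connection-level part of that check, using a plain TCP dial and a placeholder address rather than the real in-pod gRPC health RPC:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Approximates the probe's failure mode: if nothing answers on the
    // registry-server port within 1s, report failure the way the prober
    // logged it above. 127.0.0.1:50051 is a placeholder; the real probe
    // targets the pod's own network namespace.
    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", 1*time.Second)
        if err != nil {
            fmt.Printf("probe failed: %v\n", err)
            return
        }
        defer conn.Close()
        fmt.Println("probe succeeded: port 50051 is accepting connections")
    }
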
pods=["openshift-marketplace/redhat-operators-gkx8c"] Jan 21 18:07:37 crc kubenswrapper[4823]: I0121 18:07:37.931583 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gkx8c" podUID="544bb921-7393-45c7-a245-396e846ae753" containerName="registry-server" containerID="cri-o://21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8" gracePeriod=2 Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.460808 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.506978 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxk8w\" (UniqueName: \"kubernetes.io/projected/544bb921-7393-45c7-a245-396e846ae753-kube-api-access-sxk8w\") pod \"544bb921-7393-45c7-a245-396e846ae753\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.507199 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-catalog-content\") pod \"544bb921-7393-45c7-a245-396e846ae753\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.507239 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-utilities\") pod \"544bb921-7393-45c7-a245-396e846ae753\" (UID: \"544bb921-7393-45c7-a245-396e846ae753\") " Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.509822 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-utilities" (OuterVolumeSpecName: "utilities") pod "544bb921-7393-45c7-a245-396e846ae753" (UID: "544bb921-7393-45c7-a245-396e846ae753"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.513301 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544bb921-7393-45c7-a245-396e846ae753-kube-api-access-sxk8w" (OuterVolumeSpecName: "kube-api-access-sxk8w") pod "544bb921-7393-45c7-a245-396e846ae753" (UID: "544bb921-7393-45c7-a245-396e846ae753"). InnerVolumeSpecName "kube-api-access-sxk8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.609994 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.610033 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxk8w\" (UniqueName: \"kubernetes.io/projected/544bb921-7393-45c7-a245-396e846ae753-kube-api-access-sxk8w\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.628029 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "544bb921-7393-45c7-a245-396e846ae753" (UID: "544bb921-7393-45c7-a245-396e846ae753"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.711623 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544bb921-7393-45c7-a245-396e846ae753-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.943290 4823 generic.go:334] "Generic (PLEG): container finished" podID="544bb921-7393-45c7-a245-396e846ae753" containerID="21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8" exitCode=0 Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.943350 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkx8c" event={"ID":"544bb921-7393-45c7-a245-396e846ae753","Type":"ContainerDied","Data":"21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8"} Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.943379 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkx8c" event={"ID":"544bb921-7393-45c7-a245-396e846ae753","Type":"ContainerDied","Data":"5b2aaccb956d02a4fc2a141f1b671f83b9f6273bd1532b6766d31e43113d0497"} Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.943415 4823 scope.go:117] "RemoveContainer" containerID="21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.943433 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gkx8c" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.971608 4823 scope.go:117] "RemoveContainer" containerID="7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8" Jan 21 18:07:38 crc kubenswrapper[4823]: I0121 18:07:38.990393 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gkx8c"] Jan 21 18:07:39 crc kubenswrapper[4823]: I0121 18:07:39.000406 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gkx8c"] Jan 21 18:07:39 crc kubenswrapper[4823]: I0121 18:07:39.014002 4823 scope.go:117] "RemoveContainer" containerID="88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3" Jan 21 18:07:39 crc kubenswrapper[4823]: I0121 18:07:39.055819 4823 scope.go:117] "RemoveContainer" containerID="21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8" Jan 21 18:07:39 crc kubenswrapper[4823]: E0121 18:07:39.058300 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8\": container with ID starting with 21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8 not found: ID does not exist" containerID="21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8" Jan 21 18:07:39 crc kubenswrapper[4823]: I0121 18:07:39.058352 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8"} err="failed to get container status \"21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8\": rpc error: code = NotFound desc = could not find container \"21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8\": container with ID starting with 21f1f1e8321e8848bc98cf0ada32d07b0b6f34e9ca737d82f87d4d060455fec8 not found: ID does not exist" Jan 21 18:07:39 crc 
kubenswrapper[4823]: I0121 18:07:39.058378 4823 scope.go:117] "RemoveContainer" containerID="7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8" Jan 21 18:07:39 crc kubenswrapper[4823]: E0121 18:07:39.060281 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8\": container with ID starting with 7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8 not found: ID does not exist" containerID="7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8" Jan 21 18:07:39 crc kubenswrapper[4823]: I0121 18:07:39.060315 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8"} err="failed to get container status \"7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8\": rpc error: code = NotFound desc = could not find container \"7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8\": container with ID starting with 7befdb19443b3dc9c55a8ab9b7febb39f187beb63223e6c723d4d657f43738b8 not found: ID does not exist" Jan 21 18:07:39 crc kubenswrapper[4823]: I0121 18:07:39.060330 4823 scope.go:117] "RemoveContainer" containerID="88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3" Jan 21 18:07:39 crc kubenswrapper[4823]: E0121 18:07:39.060546 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3\": container with ID starting with 88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3 not found: ID does not exist" containerID="88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3" Jan 21 18:07:39 crc kubenswrapper[4823]: I0121 18:07:39.060561 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3"} err="failed to get container status \"88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3\": rpc error: code = NotFound desc = could not find container \"88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3\": container with ID starting with 88be7dbd681bb5f7d1ac9a7dab445dae2b18b4709494e3e7301122487c73b6e3 not found: ID does not exist" Jan 21 18:07:39 crc kubenswrapper[4823]: I0121 18:07:39.371390 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544bb921-7393-45c7-a245-396e846ae753" path="/var/lib/kubelet/pods/544bb921-7393-45c7-a245-396e846ae753/volumes" Jan 21 18:07:47 crc kubenswrapper[4823]: I0121 18:07:47.344070 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:07:47 crc kubenswrapper[4823]: E0121 18:07:47.345890 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:07:49 crc kubenswrapper[4823]: I0121 18:07:49.320548 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 18:07:49 
crc kubenswrapper[4823]: I0121 18:07:49.321746 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="prometheus" containerID="cri-o://043e516cedffa753f7f731b672f310e54f123c06f9867079326d1b3e5e969586" gracePeriod=600 Jan 21 18:07:49 crc kubenswrapper[4823]: I0121 18:07:49.322574 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="thanos-sidecar" containerID="cri-o://3e2743663b971ddf944026dc05c9b4aa6425058354d28f36254dd7c7d0d23832" gracePeriod=600 Jan 21 18:07:49 crc kubenswrapper[4823]: I0121 18:07:49.322655 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="config-reloader" containerID="cri-o://2f21487b13c87d48f26faeffe5c3794dcaa2e08c64a87b5b17f685d0aea18e90" gracePeriod=600 Jan 21 18:07:49 crc kubenswrapper[4823]: I0121 18:07:49.700273 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/ready\": dial tcp 10.217.0.167:9090: connect: connection refused" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.083351 4823 generic.go:334] "Generic (PLEG): container finished" podID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerID="3e2743663b971ddf944026dc05c9b4aa6425058354d28f36254dd7c7d0d23832" exitCode=0 Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.083382 4823 generic.go:334] "Generic (PLEG): container finished" podID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerID="2f21487b13c87d48f26faeffe5c3794dcaa2e08c64a87b5b17f685d0aea18e90" exitCode=0 Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.083390 4823 generic.go:334] "Generic (PLEG): container finished" podID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerID="043e516cedffa753f7f731b672f310e54f123c06f9867079326d1b3e5e969586" exitCode=0 Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.083413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerDied","Data":"3e2743663b971ddf944026dc05c9b4aa6425058354d28f36254dd7c7d0d23832"} Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.083438 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerDied","Data":"2f21487b13c87d48f26faeffe5c3794dcaa2e08c64a87b5b17f685d0aea18e90"} Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.083448 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerDied","Data":"043e516cedffa753f7f731b672f310e54f123c06f9867079326d1b3e5e969586"}
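
The teardown above shows the graceful-kill contract working as intended: the kubelet asks CRI-O to stop each prometheus-metric-storage-0 container with a 600-second grace period, the runtime delivers SIGTERM, and all three containers exit 0 on their own well before the deadline, so no SIGKILL is needed (the readiness-probe "connection refused" in between is just the server closing its listener while it shuts down). A minimal sketch of the container-side half of that contract, assuming a process that runs its cleanup when SIGTERM arrives:

    package main

    import (
        "context"
        "fmt"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    // A well-behaved container under "Killing container with a grace period":
    // block until SIGTERM, finish cleanup quickly, and exit 0, which is the
    // exitCode=0 the PLEG reports for all three containers above. Only a
    // process that ignored SIGTERM would still be running at the deadline
    // and get SIGKILLed.
    func main() {
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
        defer stop()

        <-ctx.Done() // SIGTERM (or Ctrl-C) received: begin shutdown
        fmt.Println("flushing state before the grace period expires...")
        time.Sleep(100 * time.Millisecond) // stand-in for real cleanup work
        os.Exit(0)
    }
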
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.390585 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.390758 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-secret-combined-ca-bundle\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.390869 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.390903 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5whsj\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-kube-api-access-5whsj\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.390942 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-1\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.390980 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.391025 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-tls-assets\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.391053 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config-out\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.391105 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.391154 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-thanos-prometheus-http-client-file\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.391182 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-0\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.391365 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.391402 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-2\") pod \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\" (UID: \"89602b0f-8b51-492c-aa76-bd3224e1b8a5\") " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.392866 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.402715 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.403261 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.403670 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.407800 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-kube-api-access-5whsj" (OuterVolumeSpecName: "kube-api-access-5whsj") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "kube-api-access-5whsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.408043 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.410980 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.411052 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config" (OuterVolumeSpecName: "config") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.413140 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config-out" (OuterVolumeSpecName: "config-out") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.426045 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.427704 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.458040 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.494517 4823 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.494887 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5whsj\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-kube-api-access-5whsj\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.494982 4823 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.495111 4823 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.495203 4823 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89602b0f-8b51-492c-aa76-bd3224e1b8a5-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.495277 4823 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config-out\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.495364 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.495449 4823 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.495536 4823 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.495649 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") on node \"crc\" " Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 
18:07:50.495772 4823 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/89602b0f-8b51-492c-aa76-bd3224e1b8a5-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.495879 4823 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.514966 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config" (OuterVolumeSpecName: "web-config") pod "89602b0f-8b51-492c-aa76-bd3224e1b8a5" (UID: "89602b0f-8b51-492c-aa76-bd3224e1b8a5"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.530451 4823 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.531389 4823 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158") on node "crc" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.598062 4823 reconciler_common.go:293] "Volume detached for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:50 crc kubenswrapper[4823]: I0121 18:07:50.598103 4823 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89602b0f-8b51-492c-aa76-bd3224e1b8a5-web-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.093836 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"89602b0f-8b51-492c-aa76-bd3224e1b8a5","Type":"ContainerDied","Data":"b9245ea8bfe003013eda0999cf69d1573d774686b84b0499b077f1ef0c0f609f"} Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.093913 4823 scope.go:117] "RemoveContainer" containerID="3e2743663b971ddf944026dc05c9b4aa6425058354d28f36254dd7c7d0d23832" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.093971 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.147331 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.149586 4823 scope.go:117] "RemoveContainer" containerID="2f21487b13c87d48f26faeffe5c3794dcaa2e08c64a87b5b17f685d0aea18e90" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.162273 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180119 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 18:07:51 crc kubenswrapper[4823]: E0121 18:07:51.180506 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544bb921-7393-45c7-a245-396e846ae753" containerName="extract-utilities" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180522 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="544bb921-7393-45c7-a245-396e846ae753" containerName="extract-utilities" Jan 21 18:07:51 crc kubenswrapper[4823]: E0121 18:07:51.180533 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="init-config-reloader" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180540 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="init-config-reloader" Jan 21 18:07:51 crc kubenswrapper[4823]: E0121 18:07:51.180554 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="thanos-sidecar" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180562 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="thanos-sidecar" Jan 21 18:07:51 crc kubenswrapper[4823]: E0121 18:07:51.180575 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="config-reloader" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180580 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="config-reloader" Jan 21 18:07:51 crc kubenswrapper[4823]: E0121 18:07:51.180593 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544bb921-7393-45c7-a245-396e846ae753" containerName="extract-content" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180598 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="544bb921-7393-45c7-a245-396e846ae753" containerName="extract-content" Jan 21 18:07:51 crc kubenswrapper[4823]: E0121 18:07:51.180617 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="prometheus" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180623 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="prometheus" Jan 21 18:07:51 crc kubenswrapper[4823]: E0121 18:07:51.180632 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544bb921-7393-45c7-a245-396e846ae753" containerName="registry-server" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180638 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="544bb921-7393-45c7-a245-396e846ae753" containerName="registry-server" Jan 21 18:07:51 crc 
kubenswrapper[4823]: I0121 18:07:51.180816 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="prometheus" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180832 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="thanos-sidecar" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180842 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="544bb921-7393-45c7-a245-396e846ae753" containerName="registry-server" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.180878 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" containerName="config-reloader" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.182990 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.187499 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.187732 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.189653 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.197133 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.198202 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hndk8" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.198472 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.198550 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.198621 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.200347 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.203028 4823 scope.go:117] "RemoveContainer" containerID="043e516cedffa753f7f731b672f310e54f123c06f9867079326d1b3e5e969586" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.228235 4823 scope.go:117] "RemoveContainer" containerID="8fea9c777a706a8c76799da43129933f5e7c503735e994e92b38106673cd1d99" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.314828 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.314963 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crj7r\" (UniqueName: \"kubernetes.io/projected/c7dff15e-5ba2-490e-8660-5e0132b84f0f-kube-api-access-crj7r\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315023 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315100 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c7dff15e-5ba2-490e-8660-5e0132b84f0f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315131 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c7dff15e-5ba2-490e-8660-5e0132b84f0f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315158 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315197 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c7dff15e-5ba2-490e-8660-5e0132b84f0f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315232 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c7dff15e-5ba2-490e-8660-5e0132b84f0f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315293 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315345 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" 
(UniqueName: \"kubernetes.io/configmap/c7dff15e-5ba2-490e-8660-5e0132b84f0f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315385 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315414 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.315471 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.356329 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89602b0f-8b51-492c-aa76-bd3224e1b8a5" path="/var/lib/kubelet/pods/89602b0f-8b51-492c-aa76-bd3224e1b8a5/volumes" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.417806 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.417997 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418033 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crj7r\" (UniqueName: \"kubernetes.io/projected/c7dff15e-5ba2-490e-8660-5e0132b84f0f-kube-api-access-crj7r\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418087 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418158 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c7dff15e-5ba2-490e-8660-5e0132b84f0f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418189 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c7dff15e-5ba2-490e-8660-5e0132b84f0f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418213 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418254 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c7dff15e-5ba2-490e-8660-5e0132b84f0f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418286 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c7dff15e-5ba2-490e-8660-5e0132b84f0f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418347 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418400 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c7dff15e-5ba2-490e-8660-5e0132b84f0f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418429 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.418461 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.421229 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c7dff15e-5ba2-490e-8660-5e0132b84f0f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.423220 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c7dff15e-5ba2-490e-8660-5e0132b84f0f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.423297 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c7dff15e-5ba2-490e-8660-5e0132b84f0f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.424437 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.424477 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3bf7d8e6071accb44cb216af941703c855ece813d1c3a48f9936e31f1ede18e7/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.424775 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.425132 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c7dff15e-5ba2-490e-8660-5e0132b84f0f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.425408 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: 
I0121 18:07:51.426090 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.435760 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.435827 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.438448 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dff15e-5ba2-490e-8660-5e0132b84f0f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.442917 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c7dff15e-5ba2-490e-8660-5e0132b84f0f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.449264 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crj7r\" (UniqueName: \"kubernetes.io/projected/c7dff15e-5ba2-490e-8660-5e0132b84f0f-kube-api-access-crj7r\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.480343 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6fde40e-fb4b-4282-a8a9-10033fc93158\") pod \"prometheus-metric-storage-0\" (UID: \"c7dff15e-5ba2-490e-8660-5e0132b84f0f\") " pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:51 crc kubenswrapper[4823]: I0121 18:07:51.523844 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 18:07:52 crc kubenswrapper[4823]: I0121 18:07:52.011585 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 18:07:52 crc kubenswrapper[4823]: I0121 18:07:52.105943 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7dff15e-5ba2-490e-8660-5e0132b84f0f","Type":"ContainerStarted","Data":"136a3571c0defd5a2e9240a02fc109508e29b60e76674d1950b1df38cd090710"} Jan 21 18:07:56 crc kubenswrapper[4823]: I0121 18:07:56.148751 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7dff15e-5ba2-490e-8660-5e0132b84f0f","Type":"ContainerStarted","Data":"357d8713089b6ab75ca0addcce550d8e6f437574f77707ec0617e40713707d1d"} Jan 21 18:08:00 crc kubenswrapper[4823]: I0121 18:08:00.281625 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:08:00 crc kubenswrapper[4823]: E0121 18:08:00.283398 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:08:02 crc kubenswrapper[4823]: I0121 18:08:02.308024 4823 generic.go:334] "Generic (PLEG): container finished" podID="c7dff15e-5ba2-490e-8660-5e0132b84f0f" containerID="357d8713089b6ab75ca0addcce550d8e6f437574f77707ec0617e40713707d1d" exitCode=0 Jan 21 18:08:02 crc kubenswrapper[4823]: I0121 18:08:02.308511 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7dff15e-5ba2-490e-8660-5e0132b84f0f","Type":"ContainerDied","Data":"357d8713089b6ab75ca0addcce550d8e6f437574f77707ec0617e40713707d1d"} Jan 21 18:08:03 crc kubenswrapper[4823]: I0121 18:08:03.318969 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7dff15e-5ba2-490e-8660-5e0132b84f0f","Type":"ContainerStarted","Data":"02d801b4c3aab11d56e50c0e58c253e37dfada7a16297d54ebaa7ca45a68a2bb"} Jan 21 18:08:07 crc kubenswrapper[4823]: I0121 18:08:07.381369 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7dff15e-5ba2-490e-8660-5e0132b84f0f","Type":"ContainerStarted","Data":"fd99c66959ffa5f0623ce3935a4c1cda6c2fa2d634e1053f994f1b2ff019d6a5"} Jan 21 18:08:07 crc kubenswrapper[4823]: I0121 18:08:07.383572 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7dff15e-5ba2-490e-8660-5e0132b84f0f","Type":"ContainerStarted","Data":"3bc23370775b84c17ef22c11609b0e90fda8658dcaa9f1e2c8fd19fb4d0e861e"} Jan 21 18:08:07 crc kubenswrapper[4823]: I0121 18:08:07.421385 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.421365487 podStartE2EDuration="16.421365487s" podCreationTimestamp="2026-01-21 18:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:08:07.414312743 +0000 UTC m=+3088.340443603" 
watchObservedRunningTime="2026-01-21 18:08:07.421365487 +0000 UTC m=+3088.347496347" Jan 21 18:08:11 crc kubenswrapper[4823]: I0121 18:08:11.344727 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:08:11 crc kubenswrapper[4823]: E0121 18:08:11.345584 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:08:11 crc kubenswrapper[4823]: I0121 18:08:11.524971 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 18:08:21 crc kubenswrapper[4823]: I0121 18:08:21.525186 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 18:08:21 crc kubenswrapper[4823]: I0121 18:08:21.533390 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 18:08:22 crc kubenswrapper[4823]: I0121 18:08:22.531602 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 18:08:26 crc kubenswrapper[4823]: I0121 18:08:26.343786 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:08:26 crc kubenswrapper[4823]: E0121 18:08:26.344788 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:08:41 crc kubenswrapper[4823]: I0121 18:08:41.343743 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:08:41 crc kubenswrapper[4823]: E0121 18:08:41.344752 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.040103 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.042447 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.046211 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.046503 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.046552 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.046587 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-922mx" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.055935 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.087016 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.087132 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.087418 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-config-data\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.189418 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lp6v\" (UniqueName: \"kubernetes.io/projected/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-kube-api-access-6lp6v\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.189491 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-config-data\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.189521 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.189610 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.189644 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.189673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.189715 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.189762 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.189789 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.191665 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-config-data\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.193064 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.197979 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.291991 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.292165 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.292252 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.292350 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.292393 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.292560 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lp6v\" (UniqueName: \"kubernetes.io/projected/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-kube-api-access-6lp6v\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.293142 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.293177 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.293462 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.296498 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc 
kubenswrapper[4823]: I0121 18:08:46.296679 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.309680 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lp6v\" (UniqueName: \"kubernetes.io/projected/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-kube-api-access-6lp6v\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.332735 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.391145 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.907818 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 18:08:46 crc kubenswrapper[4823]: W0121 18:08:46.914262 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0110a9b7_0ded_42e4_b57b_7d6f8bf5f62f.slice/crio-f92797d626577d38462c12e5884108f2d7ec787263a2e8087ca8574d4d780162 WatchSource:0}: Error finding container f92797d626577d38462c12e5884108f2d7ec787263a2e8087ca8574d4d780162: Status 404 returned error can't find the container with id f92797d626577d38462c12e5884108f2d7ec787263a2e8087ca8574d4d780162 Jan 21 18:08:46 crc kubenswrapper[4823]: I0121 18:08:46.916578 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:08:47 crc kubenswrapper[4823]: I0121 18:08:47.761693 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f","Type":"ContainerStarted","Data":"f92797d626577d38462c12e5884108f2d7ec787263a2e8087ca8574d4d780162"} Jan 21 18:08:55 crc kubenswrapper[4823]: I0121 18:08:55.344330 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:08:55 crc kubenswrapper[4823]: E0121 18:08:55.344976 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:08:59 crc kubenswrapper[4823]: E0121 18:08:59.497455 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.128:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Jan 21 18:08:59 crc kubenswrapper[4823]: E0121 18:08:59.498394 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.128:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Jan 21 18:08:59 crc kubenswrapper[4823]: E0121 18:08:59.498756 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.128:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lp6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 18:08:59 crc kubenswrapper[4823]: E0121 18:08:59.500135 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" Jan 21 18:08:59 crc kubenswrapper[4823]: E0121 18:08:59.894737 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.128:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest\\\"\"" pod="openstack/tempest-tests-tempest" podUID="0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" Jan 21 18:09:06 crc kubenswrapper[4823]: I0121 18:09:06.343589 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:09:06 crc kubenswrapper[4823]: E0121 18:09:06.344255 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:09:14 crc kubenswrapper[4823]: I0121 18:09:14.063944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f","Type":"ContainerStarted","Data":"9f2877648488784246bb2d2d079277649debcb3a3e9036f27e442284c15d7dbf"} Jan 21 18:09:14 crc kubenswrapper[4823]: I0121 18:09:14.097889 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.60304155 podStartE2EDuration="29.097849449s" podCreationTimestamp="2026-01-21 18:08:45 +0000 UTC" firstStartedPulling="2026-01-21 18:08:46.916373132 +0000 UTC m=+3127.842503992" lastFinishedPulling="2026-01-21 18:09:12.411181031 +0000 UTC m=+3153.337311891" observedRunningTime="2026-01-21 18:09:14.087268347 +0000 UTC m=+3155.013399267" watchObservedRunningTime="2026-01-21 18:09:14.097849449 +0000 UTC m=+3155.023980309" Jan 21 18:09:21 crc kubenswrapper[4823]: I0121 18:09:21.346629 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:09:21 crc kubenswrapper[4823]: E0121 18:09:21.347577 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:09:35 crc kubenswrapper[4823]: I0121 18:09:35.344327 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:09:35 crc kubenswrapper[4823]: E0121 18:09:35.345102 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" 
podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:09:49 crc kubenswrapper[4823]: I0121 18:09:49.354269 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:09:49 crc kubenswrapper[4823]: E0121 18:09:49.355365 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:10:01 crc kubenswrapper[4823]: I0121 18:10:01.344154 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:10:01 crc kubenswrapper[4823]: E0121 18:10:01.344955 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:10:12 crc kubenswrapper[4823]: I0121 18:10:12.966298 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hj5w8"] Jan 21 18:10:12 crc kubenswrapper[4823]: I0121 18:10:12.968676 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:12 crc kubenswrapper[4823]: I0121 18:10:12.985361 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj5w8"] Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.081230 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-catalog-content\") pod \"redhat-marketplace-hj5w8\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.081342 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw4jb\" (UniqueName: \"kubernetes.io/projected/4755a5ad-8acd-43a2-a62f-65b706307668-kube-api-access-hw4jb\") pod \"redhat-marketplace-hj5w8\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.081484 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-utilities\") pod \"redhat-marketplace-hj5w8\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.183307 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-catalog-content\") pod \"redhat-marketplace-hj5w8\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " 
pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.183420 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw4jb\" (UniqueName: \"kubernetes.io/projected/4755a5ad-8acd-43a2-a62f-65b706307668-kube-api-access-hw4jb\") pod \"redhat-marketplace-hj5w8\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.183539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-utilities\") pod \"redhat-marketplace-hj5w8\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.183834 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-catalog-content\") pod \"redhat-marketplace-hj5w8\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.184160 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-utilities\") pod \"redhat-marketplace-hj5w8\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.218721 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw4jb\" (UniqueName: \"kubernetes.io/projected/4755a5ad-8acd-43a2-a62f-65b706307668-kube-api-access-hw4jb\") pod \"redhat-marketplace-hj5w8\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.293318 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:13 crc kubenswrapper[4823]: I0121 18:10:13.813102 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj5w8"] Jan 21 18:10:14 crc kubenswrapper[4823]: I0121 18:10:14.343327 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:10:14 crc kubenswrapper[4823]: E0121 18:10:14.343835 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:10:14 crc kubenswrapper[4823]: I0121 18:10:14.727114 4823 generic.go:334] "Generic (PLEG): container finished" podID="4755a5ad-8acd-43a2-a62f-65b706307668" containerID="a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621" exitCode=0 Jan 21 18:10:14 crc kubenswrapper[4823]: I0121 18:10:14.727160 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5w8" event={"ID":"4755a5ad-8acd-43a2-a62f-65b706307668","Type":"ContainerDied","Data":"a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621"} Jan 21 18:10:14 crc kubenswrapper[4823]: I0121 18:10:14.727186 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5w8" event={"ID":"4755a5ad-8acd-43a2-a62f-65b706307668","Type":"ContainerStarted","Data":"f3802652242473ca7cf8ec262ac3290c131960f4d7dfec829c2169dfde5c8f0f"} Jan 21 18:10:16 crc kubenswrapper[4823]: I0121 18:10:16.745317 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5w8" event={"ID":"4755a5ad-8acd-43a2-a62f-65b706307668","Type":"ContainerStarted","Data":"47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643"} Jan 21 18:10:17 crc kubenswrapper[4823]: I0121 18:10:17.758582 4823 generic.go:334] "Generic (PLEG): container finished" podID="4755a5ad-8acd-43a2-a62f-65b706307668" containerID="47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643" exitCode=0 Jan 21 18:10:17 crc kubenswrapper[4823]: I0121 18:10:17.758693 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5w8" event={"ID":"4755a5ad-8acd-43a2-a62f-65b706307668","Type":"ContainerDied","Data":"47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643"} Jan 21 18:10:19 crc kubenswrapper[4823]: I0121 18:10:19.784913 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5w8" event={"ID":"4755a5ad-8acd-43a2-a62f-65b706307668","Type":"ContainerStarted","Data":"fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605"} Jan 21 18:10:19 crc kubenswrapper[4823]: I0121 18:10:19.822163 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hj5w8" podStartSLOduration=3.853420044 podStartE2EDuration="7.822144275s" podCreationTimestamp="2026-01-21 18:10:12 +0000 UTC" firstStartedPulling="2026-01-21 18:10:14.72909013 +0000 UTC m=+3215.655220990" lastFinishedPulling="2026-01-21 18:10:18.697814361 +0000 UTC m=+3219.623945221" observedRunningTime="2026-01-21 
18:10:19.809147914 +0000 UTC m=+3220.735278844" watchObservedRunningTime="2026-01-21 18:10:19.822144275 +0000 UTC m=+3220.748275145" Jan 21 18:10:23 crc kubenswrapper[4823]: I0121 18:10:23.294154 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:23 crc kubenswrapper[4823]: I0121 18:10:23.295291 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:23 crc kubenswrapper[4823]: I0121 18:10:23.359152 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:23 crc kubenswrapper[4823]: I0121 18:10:23.862936 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:23 crc kubenswrapper[4823]: I0121 18:10:23.918731 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj5w8"] Jan 21 18:10:25 crc kubenswrapper[4823]: I0121 18:10:25.838779 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hj5w8" podUID="4755a5ad-8acd-43a2-a62f-65b706307668" containerName="registry-server" containerID="cri-o://fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605" gracePeriod=2 Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.308369 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.359667 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw4jb\" (UniqueName: \"kubernetes.io/projected/4755a5ad-8acd-43a2-a62f-65b706307668-kube-api-access-hw4jb\") pod \"4755a5ad-8acd-43a2-a62f-65b706307668\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.359723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-catalog-content\") pod \"4755a5ad-8acd-43a2-a62f-65b706307668\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.359758 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-utilities\") pod \"4755a5ad-8acd-43a2-a62f-65b706307668\" (UID: \"4755a5ad-8acd-43a2-a62f-65b706307668\") " Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.360965 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-utilities" (OuterVolumeSpecName: "utilities") pod "4755a5ad-8acd-43a2-a62f-65b706307668" (UID: "4755a5ad-8acd-43a2-a62f-65b706307668"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.365751 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4755a5ad-8acd-43a2-a62f-65b706307668-kube-api-access-hw4jb" (OuterVolumeSpecName: "kube-api-access-hw4jb") pod "4755a5ad-8acd-43a2-a62f-65b706307668" (UID: "4755a5ad-8acd-43a2-a62f-65b706307668"). InnerVolumeSpecName "kube-api-access-hw4jb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.392923 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4755a5ad-8acd-43a2-a62f-65b706307668" (UID: "4755a5ad-8acd-43a2-a62f-65b706307668"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.462100 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw4jb\" (UniqueName: \"kubernetes.io/projected/4755a5ad-8acd-43a2-a62f-65b706307668-kube-api-access-hw4jb\") on node \"crc\" DevicePath \"\"" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.462147 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.462156 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a5ad-8acd-43a2-a62f-65b706307668-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.849543 4823 generic.go:334] "Generic (PLEG): container finished" podID="4755a5ad-8acd-43a2-a62f-65b706307668" containerID="fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605" exitCode=0 Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.849586 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5w8" event={"ID":"4755a5ad-8acd-43a2-a62f-65b706307668","Type":"ContainerDied","Data":"fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605"} Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.849610 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5w8" event={"ID":"4755a5ad-8acd-43a2-a62f-65b706307668","Type":"ContainerDied","Data":"f3802652242473ca7cf8ec262ac3290c131960f4d7dfec829c2169dfde5c8f0f"} Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.849625 4823 scope.go:117] "RemoveContainer" containerID="fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.849748 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj5w8" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.870056 4823 scope.go:117] "RemoveContainer" containerID="47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.889952 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj5w8"] Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.899888 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj5w8"] Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.908350 4823 scope.go:117] "RemoveContainer" containerID="a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.947721 4823 scope.go:117] "RemoveContainer" containerID="fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605" Jan 21 18:10:26 crc kubenswrapper[4823]: E0121 18:10:26.948203 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605\": container with ID starting with fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605 not found: ID does not exist" containerID="fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.948234 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605"} err="failed to get container status \"fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605\": rpc error: code = NotFound desc = could not find container \"fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605\": container with ID starting with fd6cf6ede309ec394e6b190cb1094441791f5ee89ec666c7c0a2725acb04e605 not found: ID does not exist" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.948255 4823 scope.go:117] "RemoveContainer" containerID="47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643" Jan 21 18:10:26 crc kubenswrapper[4823]: E0121 18:10:26.948582 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643\": container with ID starting with 47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643 not found: ID does not exist" containerID="47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.948712 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643"} err="failed to get container status \"47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643\": rpc error: code = NotFound desc = could not find container \"47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643\": container with ID starting with 47626fe462fe56b492ad602eb1aa4d4c9b3846cc713af5844cb23756aafa1643 not found: ID does not exist" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.948817 4823 scope.go:117] "RemoveContainer" containerID="a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621" Jan 21 18:10:26 crc kubenswrapper[4823]: E0121 18:10:26.949237 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621\": container with ID starting with a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621 not found: ID does not exist" containerID="a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621" Jan 21 18:10:26 crc kubenswrapper[4823]: I0121 18:10:26.949264 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621"} err="failed to get container status \"a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621\": rpc error: code = NotFound desc = could not find container \"a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621\": container with ID starting with a293b229a936f339ebcd2bbad2c149e0705b88afe5f329744b906bd0c7659621 not found: ID does not exist" Jan 21 18:10:27 crc kubenswrapper[4823]: I0121 18:10:27.357451 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4755a5ad-8acd-43a2-a62f-65b706307668" path="/var/lib/kubelet/pods/4755a5ad-8acd-43a2-a62f-65b706307668/volumes" Jan 21 18:10:29 crc kubenswrapper[4823]: I0121 18:10:29.358524 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48" Jan 21 18:10:29 crc kubenswrapper[4823]: I0121 18:10:29.886710 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"ee845ec4eaabe33255a9e404193ffa91dda969aea60e5dee8732e82716b9a4da"} Jan 21 18:12:45 crc kubenswrapper[4823]: I0121 18:12:45.070650 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:12:45 crc kubenswrapper[4823]: I0121 18:12:45.071167 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:13:15 crc kubenswrapper[4823]: I0121 18:13:15.071507 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:13:15 crc kubenswrapper[4823]: I0121 18:13:15.072207 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.071414 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 21 18:10:29 crc kubenswrapper[4823]: I0121 18:10:29.358524 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48"
Jan 21 18:10:29 crc kubenswrapper[4823]: I0121 18:10:29.886710 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"ee845ec4eaabe33255a9e404193ffa91dda969aea60e5dee8732e82716b9a4da"}
Jan 21 18:12:45 crc kubenswrapper[4823]: I0121 18:12:45.070650 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 18:12:45 crc kubenswrapper[4823]: I0121 18:12:45.071167 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 18:13:15 crc kubenswrapper[4823]: I0121 18:13:15.071507 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 18:13:15 crc kubenswrapper[4823]: I0121 18:13:15.072207 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.071414 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.071789 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.071834 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw"
Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.073814 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee845ec4eaabe33255a9e404193ffa91dda969aea60e5dee8732e82716b9a4da"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.074074 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://ee845ec4eaabe33255a9e404193ffa91dda969aea60e5dee8732e82716b9a4da" gracePeriod=600
Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.830594 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="ee845ec4eaabe33255a9e404193ffa91dda969aea60e5dee8732e82716b9a4da" exitCode=0
Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.830671 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"ee845ec4eaabe33255a9e404193ffa91dda969aea60e5dee8732e82716b9a4da"}
Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.831269 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac"}
Jan 21 18:13:45 crc kubenswrapper[4823]: I0121 18:13:45.831296 4823 scope.go:117] "RemoveContainer" containerID="25502812baebd3a7a494765976c2a4790a96d36dd0d8e51a600eab536bd7fc48"
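
The liveness failures above land at 18:12:45, 18:13:15 and 18:13:45, a 30-second cadence, and the restart decision follows the third failure; that pacing is consistent with a periodSeconds of 30 and a failureThreshold of 3, though the pod spec itself is not in this log. The probe is an HTTP GET against http://127.0.0.1:8798/health. A rough stand-in for what an HTTP liveness probe does (a sketch in plain Python, not the kubelet's code; any port with no listener reproduces the "connection refused" result):

    import urllib.request

    def http_probe(url="http://127.0.0.1:8798/health", timeout=1.0):
        # Roughly the HTTP probe contract: a 2xx/3xx response is success;
        # anything else, including a refused connection, is failure.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except OSError:  # URLError subclasses OSError; covers "connection refused"
            return False

    print(http_probe())  # False while nothing is listening on 127.0.0.1:8798
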
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.151907 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"]
Jan 21 18:15:00 crc kubenswrapper[4823]: E0121 18:15:00.153873 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4755a5ad-8acd-43a2-a62f-65b706307668" containerName="registry-server"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.153980 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4755a5ad-8acd-43a2-a62f-65b706307668" containerName="registry-server"
Jan 21 18:15:00 crc kubenswrapper[4823]: E0121 18:15:00.154052 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4755a5ad-8acd-43a2-a62f-65b706307668" containerName="extract-content"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.154105 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4755a5ad-8acd-43a2-a62f-65b706307668" containerName="extract-content"
Jan 21 18:15:00 crc kubenswrapper[4823]: E0121 18:15:00.154182 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4755a5ad-8acd-43a2-a62f-65b706307668" containerName="extract-utilities"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.154244 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4755a5ad-8acd-43a2-a62f-65b706307668" containerName="extract-utilities"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.154578 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4755a5ad-8acd-43a2-a62f-65b706307668" containerName="registry-server"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.155504 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.158291 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.158612 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.166310 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"]
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.229387 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa07458f-9629-42c2-b374-e35e31a60f67-secret-volume\") pod \"collect-profiles-29483655-l4q88\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.229464 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsqh\" (UniqueName: \"kubernetes.io/projected/aa07458f-9629-42c2-b374-e35e31a60f67-kube-api-access-9tsqh\") pod \"collect-profiles-29483655-l4q88\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.229552 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa07458f-9629-42c2-b374-e35e31a60f67-config-volume\") pod \"collect-profiles-29483655-l4q88\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.331385 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa07458f-9629-42c2-b374-e35e31a60f67-secret-volume\") pod \"collect-profiles-29483655-l4q88\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.331466 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tsqh\" (UniqueName: \"kubernetes.io/projected/aa07458f-9629-42c2-b374-e35e31a60f67-kube-api-access-9tsqh\") pod \"collect-profiles-29483655-l4q88\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.331533 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa07458f-9629-42c2-b374-e35e31a60f67-config-volume\") pod \"collect-profiles-29483655-l4q88\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.332754 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa07458f-9629-42c2-b374-e35e31a60f67-config-volume\") pod \"collect-profiles-29483655-l4q88\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.339748 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa07458f-9629-42c2-b374-e35e31a60f67-secret-volume\") pod \"collect-profiles-29483655-l4q88\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.349891 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tsqh\" (UniqueName: \"kubernetes.io/projected/aa07458f-9629-42c2-b374-e35e31a60f67-kube-api-access-9tsqh\") pod \"collect-profiles-29483655-l4q88\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.484190 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:00 crc kubenswrapper[4823]: I0121 18:15:00.935806 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"]
Jan 21 18:15:01 crc kubenswrapper[4823]: I0121 18:15:01.547389 4823 generic.go:334] "Generic (PLEG): container finished" podID="aa07458f-9629-42c2-b374-e35e31a60f67" containerID="bb7e8731e2a9d8fe8aa7cd3bc1d3d696acf9d41c7c778cf1cb3546e1fd8e7b58" exitCode=0
Jan 21 18:15:01 crc kubenswrapper[4823]: I0121 18:15:01.547503 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88" event={"ID":"aa07458f-9629-42c2-b374-e35e31a60f67","Type":"ContainerDied","Data":"bb7e8731e2a9d8fe8aa7cd3bc1d3d696acf9d41c7c778cf1cb3546e1fd8e7b58"}
Jan 21 18:15:01 crc kubenswrapper[4823]: I0121 18:15:01.547757 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88" event={"ID":"aa07458f-9629-42c2-b374-e35e31a60f67","Type":"ContainerStarted","Data":"8ab6184d5c9009b219f53e899968e8381b10a16979fdc6c64c72be32844d169d"}
Jan 21 18:15:02 crc kubenswrapper[4823]: I0121 18:15:02.912011 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:02 crc kubenswrapper[4823]: I0121 18:15:02.989800 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa07458f-9629-42c2-b374-e35e31a60f67-config-volume\") pod \"aa07458f-9629-42c2-b374-e35e31a60f67\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") "
Jan 21 18:15:02 crc kubenswrapper[4823]: I0121 18:15:02.990303 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tsqh\" (UniqueName: \"kubernetes.io/projected/aa07458f-9629-42c2-b374-e35e31a60f67-kube-api-access-9tsqh\") pod \"aa07458f-9629-42c2-b374-e35e31a60f67\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") "
Jan 21 18:15:02 crc kubenswrapper[4823]: I0121 18:15:02.990357 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa07458f-9629-42c2-b374-e35e31a60f67-secret-volume\") pod \"aa07458f-9629-42c2-b374-e35e31a60f67\" (UID: \"aa07458f-9629-42c2-b374-e35e31a60f67\") "
Jan 21 18:15:02 crc kubenswrapper[4823]: I0121 18:15:02.990610 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa07458f-9629-42c2-b374-e35e31a60f67-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa07458f-9629-42c2-b374-e35e31a60f67" (UID: "aa07458f-9629-42c2-b374-e35e31a60f67"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 18:15:02 crc kubenswrapper[4823]: I0121 18:15:02.991245 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa07458f-9629-42c2-b374-e35e31a60f67-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:02 crc kubenswrapper[4823]: I0121 18:15:02.997952 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa07458f-9629-42c2-b374-e35e31a60f67-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa07458f-9629-42c2-b374-e35e31a60f67" (UID: "aa07458f-9629-42c2-b374-e35e31a60f67"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 18:15:03 crc kubenswrapper[4823]: I0121 18:15:03.000004 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa07458f-9629-42c2-b374-e35e31a60f67-kube-api-access-9tsqh" (OuterVolumeSpecName: "kube-api-access-9tsqh") pod "aa07458f-9629-42c2-b374-e35e31a60f67" (UID: "aa07458f-9629-42c2-b374-e35e31a60f67"). InnerVolumeSpecName "kube-api-access-9tsqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 18:15:03 crc kubenswrapper[4823]: I0121 18:15:03.093102 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tsqh\" (UniqueName: \"kubernetes.io/projected/aa07458f-9629-42c2-b374-e35e31a60f67-kube-api-access-9tsqh\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:03 crc kubenswrapper[4823]: I0121 18:15:03.093136 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa07458f-9629-42c2-b374-e35e31a60f67-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:03 crc kubenswrapper[4823]: I0121 18:15:03.568363 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88" event={"ID":"aa07458f-9629-42c2-b374-e35e31a60f67","Type":"ContainerDied","Data":"8ab6184d5c9009b219f53e899968e8381b10a16979fdc6c64c72be32844d169d"}
Jan 21 18:15:03 crc kubenswrapper[4823]: I0121 18:15:03.568415 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ab6184d5c9009b219f53e899968e8381b10a16979fdc6c64c72be32844d169d"
Jan 21 18:15:03 crc kubenswrapper[4823]: I0121 18:15:03.568424 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-l4q88"
Jan 21 18:15:03 crc kubenswrapper[4823]: I0121 18:15:03.980835 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r"]
Jan 21 18:15:03 crc kubenswrapper[4823]: I0121 18:15:03.989395 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-5g22r"]
Jan 21 18:15:05 crc kubenswrapper[4823]: I0121 18:15:05.358115 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef08832-b476-4255-b562-5aa266113f1f" path="/var/lib/kubelet/pods/2ef08832-b476-4255-b562-5aa266113f1f/volumes"
Need to start a new one" pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.259048 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-utilities\") pod \"community-operators-zc4mv\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.259106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crz4\" (UniqueName: \"kubernetes.io/projected/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-kube-api-access-2crz4\") pod \"community-operators-zc4mv\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.259209 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-catalog-content\") pod \"community-operators-zc4mv\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.268112 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zc4mv"] Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.392291 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-utilities\") pod \"community-operators-zc4mv\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.392363 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2crz4\" (UniqueName: \"kubernetes.io/projected/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-kube-api-access-2crz4\") pod \"community-operators-zc4mv\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.392509 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-catalog-content\") pod \"community-operators-zc4mv\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.394253 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-utilities\") pod \"community-operators-zc4mv\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.394253 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-catalog-content\") pod \"community-operators-zc4mv\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.416389 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2crz4\" (UniqueName: \"kubernetes.io/projected/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-kube-api-access-2crz4\") pod \"community-operators-zc4mv\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:16 crc kubenswrapper[4823]: I0121 18:15:16.564528 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:17 crc kubenswrapper[4823]: I0121 18:15:17.154011 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zc4mv"] Jan 21 18:15:17 crc kubenswrapper[4823]: I0121 18:15:17.702975 4823 generic.go:334] "Generic (PLEG): container finished" podID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerID="0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257" exitCode=0 Jan 21 18:15:17 crc kubenswrapper[4823]: I0121 18:15:17.703108 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zc4mv" event={"ID":"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731","Type":"ContainerDied","Data":"0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257"} Jan 21 18:15:17 crc kubenswrapper[4823]: I0121 18:15:17.703356 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zc4mv" event={"ID":"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731","Type":"ContainerStarted","Data":"916d6e3b2d3274ea9e7f5b526cf36cdd794aa728a069c16f7e4f0814d10c3292"} Jan 21 18:15:17 crc kubenswrapper[4823]: I0121 18:15:17.705730 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:15:19 crc kubenswrapper[4823]: I0121 18:15:19.722796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zc4mv" event={"ID":"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731","Type":"ContainerStarted","Data":"6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a"} Jan 21 18:15:20 crc kubenswrapper[4823]: I0121 18:15:20.733386 4823 generic.go:334] "Generic (PLEG): container finished" podID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerID="6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a" exitCode=0 Jan 21 18:15:20 crc kubenswrapper[4823]: I0121 18:15:20.733444 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zc4mv" event={"ID":"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731","Type":"ContainerDied","Data":"6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a"} Jan 21 18:15:21 crc kubenswrapper[4823]: I0121 18:15:21.743908 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zc4mv" event={"ID":"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731","Type":"ContainerStarted","Data":"7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6"} Jan 21 18:15:21 crc kubenswrapper[4823]: I0121 18:15:21.768954 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zc4mv" podStartSLOduration=2.341802803 podStartE2EDuration="5.768933468s" podCreationTimestamp="2026-01-21 18:15:16 +0000 UTC" firstStartedPulling="2026-01-21 18:15:17.70534948 +0000 UTC m=+3518.631480350" lastFinishedPulling="2026-01-21 18:15:21.132480155 +0000 UTC m=+3522.058611015" observedRunningTime="2026-01-21 18:15:21.760739336 +0000 UTC m=+3522.686870206" watchObservedRunningTime="2026-01-21 
18:15:21.768933468 +0000 UTC m=+3522.695064328" Jan 21 18:15:26 crc kubenswrapper[4823]: I0121 18:15:26.564790 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:26 crc kubenswrapper[4823]: I0121 18:15:26.565388 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:26 crc kubenswrapper[4823]: I0121 18:15:26.614185 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:26 crc kubenswrapper[4823]: I0121 18:15:26.872841 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:26 crc kubenswrapper[4823]: I0121 18:15:26.922471 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zc4mv"] Jan 21 18:15:28 crc kubenswrapper[4823]: I0121 18:15:28.836372 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zc4mv" podUID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerName="registry-server" containerID="cri-o://7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6" gracePeriod=2 Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.465078 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.555785 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2crz4\" (UniqueName: \"kubernetes.io/projected/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-kube-api-access-2crz4\") pod \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.555968 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-utilities\") pod \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.556024 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-catalog-content\") pod \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\" (UID: \"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731\") " Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.557166 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-utilities" (OuterVolumeSpecName: "utilities") pod "e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" (UID: "e78ba138-9b7d-4c7c-973e-d2ae6c2b4731"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.564415 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-kube-api-access-2crz4" (OuterVolumeSpecName: "kube-api-access-2crz4") pod "e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" (UID: "e78ba138-9b7d-4c7c-973e-d2ae6c2b4731"). InnerVolumeSpecName "kube-api-access-2crz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.611035 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" (UID: "e78ba138-9b7d-4c7c-973e-d2ae6c2b4731"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.658069 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.658106 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.658121 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2crz4\" (UniqueName: \"kubernetes.io/projected/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731-kube-api-access-2crz4\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.848716 4823 generic.go:334] "Generic (PLEG): container finished" podID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerID="7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6" exitCode=0 Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.848760 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zc4mv" event={"ID":"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731","Type":"ContainerDied","Data":"7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6"} Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.848794 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zc4mv" event={"ID":"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731","Type":"ContainerDied","Data":"916d6e3b2d3274ea9e7f5b526cf36cdd794aa728a069c16f7e4f0814d10c3292"} Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.848807 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zc4mv" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.848816 4823 scope.go:117] "RemoveContainer" containerID="7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.870945 4823 scope.go:117] "RemoveContainer" containerID="6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.887496 4823 scope.go:117] "RemoveContainer" containerID="b1735f77a7f985a5e61b511d0e1b76fa1867ef62e35a2c9b45649a452a43a8dc" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.894293 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zc4mv"] Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.905053 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zc4mv"] Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.943675 4823 scope.go:117] "RemoveContainer" containerID="0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.980160 4823 scope.go:117] "RemoveContainer" containerID="7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6" Jan 21 18:15:29 crc kubenswrapper[4823]: E0121 18:15:29.980612 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6\": container with ID starting with 7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6 not found: ID does not exist" containerID="7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.980655 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6"} err="failed to get container status \"7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6\": rpc error: code = NotFound desc = could not find container \"7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6\": container with ID starting with 7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6 not found: ID does not exist" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.980688 4823 scope.go:117] "RemoveContainer" containerID="6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a" Jan 21 18:15:29 crc kubenswrapper[4823]: E0121 18:15:29.981111 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a\": container with ID starting with 6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a not found: ID does not exist" containerID="6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.981170 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a"} err="failed to get container status \"6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a\": rpc error: code = NotFound desc = could not find container \"6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a\": container with ID starting with 
6bb36027d0f8b87ab8c711b7cccc7ac6eb166a2e0bc81ffe58cd04375afbc64a not found: ID does not exist" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.981202 4823 scope.go:117] "RemoveContainer" containerID="0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257" Jan 21 18:15:29 crc kubenswrapper[4823]: E0121 18:15:29.981556 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257\": container with ID starting with 0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257 not found: ID does not exist" containerID="0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257" Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.981608 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257"} err="failed to get container status \"0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257\": rpc error: code = NotFound desc = could not find container \"0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257\": container with ID starting with 0e01c71437e832bc787d675eaa1394bbcc6ed0a399deb676441d1016de317257 not found: ID does not exist" Jan 21 18:15:31 crc kubenswrapper[4823]: I0121 18:15:31.358284 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" path="/var/lib/kubelet/pods/e78ba138-9b7d-4c7c-973e-d2ae6c2b4731/volumes" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.260592 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tctxc"] Jan 21 18:15:42 crc kubenswrapper[4823]: E0121 18:15:42.261543 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerName="registry-server" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.261556 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerName="registry-server" Jan 21 18:15:42 crc kubenswrapper[4823]: E0121 18:15:42.261569 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerName="extract-content" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.261575 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerName="extract-content" Jan 21 18:15:42 crc kubenswrapper[4823]: E0121 18:15:42.261591 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerName="extract-utilities" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.261597 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerName="extract-utilities" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.261807 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78ba138-9b7d-4c7c-973e-d2ae6c2b4731" containerName="registry-server" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.263881 4823 util.go:30] "No sandbox for pod can be found. 
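
Each marketplace catalog pod in this log leaves the same PLEG trail: extract-utilities and extract-content run to completion, registry-server starts, and on deletion both the container and its sandbox emit ContainerDied, followed by RemoveContainer calls whose NotFound errors just mean CRI-O already deleted the container. These "SyncLoop (PLEG): event for pod" lines have a stable shape, so a timeline can be reconstructed mechanically. A small illustrative parser (plain Python, ours, not kubelet code; the field names mirror the event={...} payload above):

    import re

    # Pull pod, event type and container ID out of PLEG event lines.
    EVENT = re.compile(
        r'pod="(?P<pod>[^"]+)" '
        r'event={"ID":"(?P<uid>[^"]+)","Type":"(?P<kind>[^"]+)","Data":"(?P<cid>[0-9a-f]+)"}'
    )

    def timeline(lines):
        out = []
        for line in lines:
            m = EVENT.search(line)
            if m:
                out.append((m["pod"].split("/")[-1], m["kind"], m["cid"][:12]))
        return out

    sample = ('Jan 21 18:15:29 crc kubenswrapper[4823]: I0121 18:15:29.848760 4823 '
              'kubelet.go:2453] "SyncLoop (PLEG): event for pod" '
              'pod="openshift-marketplace/community-operators-zc4mv" '
              'event={"ID":"e78ba138-9b7d-4c7c-973e-d2ae6c2b4731","Type":"ContainerDied",'
              '"Data":"7ef86906e625bd2bc89879831bbd4872fa9127a1e3402956d4aca3031bfc67a6"}')
    print(timeline([sample]))  # [('community-operators-zc4mv', 'ContainerDied', '7ef86906e625')]
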
Need to start a new one" pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.273361 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tctxc"] Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.322978 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-utilities\") pod \"certified-operators-tctxc\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.323182 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-catalog-content\") pod \"certified-operators-tctxc\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.323216 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqxc\" (UniqueName: \"kubernetes.io/projected/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-kube-api-access-5fqxc\") pod \"certified-operators-tctxc\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.426979 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-catalog-content\") pod \"certified-operators-tctxc\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.427314 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fqxc\" (UniqueName: \"kubernetes.io/projected/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-kube-api-access-5fqxc\") pod \"certified-operators-tctxc\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.427598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-utilities\") pod \"certified-operators-tctxc\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.428059 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-utilities\") pod \"certified-operators-tctxc\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.428084 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-catalog-content\") pod \"certified-operators-tctxc\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.451865 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5fqxc\" (UniqueName: \"kubernetes.io/projected/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-kube-api-access-5fqxc\") pod \"certified-operators-tctxc\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:42 crc kubenswrapper[4823]: I0121 18:15:42.589616 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:43 crc kubenswrapper[4823]: I0121 18:15:43.126573 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tctxc"] Jan 21 18:15:43 crc kubenswrapper[4823]: I0121 18:15:43.983988 4823 generic.go:334] "Generic (PLEG): container finished" podID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerID="46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907" exitCode=0 Jan 21 18:15:43 crc kubenswrapper[4823]: I0121 18:15:43.984106 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tctxc" event={"ID":"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0","Type":"ContainerDied","Data":"46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907"} Jan 21 18:15:43 crc kubenswrapper[4823]: I0121 18:15:43.984284 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tctxc" event={"ID":"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0","Type":"ContainerStarted","Data":"36aca4cdda9fc796840b169f18b08166d60f0fb768544b0c45c6a4d7a75248a9"} Jan 21 18:15:45 crc kubenswrapper[4823]: I0121 18:15:45.000466 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tctxc" event={"ID":"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0","Type":"ContainerStarted","Data":"6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8"} Jan 21 18:15:45 crc kubenswrapper[4823]: I0121 18:15:45.070816 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:15:45 crc kubenswrapper[4823]: I0121 18:15:45.070877 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:15:46 crc kubenswrapper[4823]: I0121 18:15:46.009557 4823 generic.go:334] "Generic (PLEG): container finished" podID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerID="6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8" exitCode=0 Jan 21 18:15:46 crc kubenswrapper[4823]: I0121 18:15:46.009592 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tctxc" event={"ID":"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0","Type":"ContainerDied","Data":"6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8"} Jan 21 18:15:47 crc kubenswrapper[4823]: I0121 18:15:47.023744 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tctxc" event={"ID":"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0","Type":"ContainerStarted","Data":"6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7"} Jan 21 
18:15:47 crc kubenswrapper[4823]: I0121 18:15:47.053720 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tctxc" podStartSLOduration=2.5971408499999997 podStartE2EDuration="5.053700622s" podCreationTimestamp="2026-01-21 18:15:42 +0000 UTC" firstStartedPulling="2026-01-21 18:15:43.987535828 +0000 UTC m=+3544.913666688" lastFinishedPulling="2026-01-21 18:15:46.44409558 +0000 UTC m=+3547.370226460" observedRunningTime="2026-01-21 18:15:47.045273414 +0000 UTC m=+3547.971404274" watchObservedRunningTime="2026-01-21 18:15:47.053700622 +0000 UTC m=+3547.979831482" Jan 21 18:15:52 crc kubenswrapper[4823]: I0121 18:15:52.590628 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:52 crc kubenswrapper[4823]: I0121 18:15:52.591203 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:52 crc kubenswrapper[4823]: I0121 18:15:52.636576 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:53 crc kubenswrapper[4823]: I0121 18:15:53.182550 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:53 crc kubenswrapper[4823]: I0121 18:15:53.272677 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tctxc"] Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.128804 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tctxc" podUID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerName="registry-server" containerID="cri-o://6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7" gracePeriod=2 Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.619644 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.719397 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-utilities\") pod \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.719457 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fqxc\" (UniqueName: \"kubernetes.io/projected/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-kube-api-access-5fqxc\") pod \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.719670 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-catalog-content\") pod \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\" (UID: \"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0\") " Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.729702 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-utilities" (OuterVolumeSpecName: "utilities") pod "d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" (UID: "d915a33a-8cc6-4ef0-b7b0-0302fa4482e0"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.733887 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-kube-api-access-5fqxc" (OuterVolumeSpecName: "kube-api-access-5fqxc") pod "d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" (UID: "d915a33a-8cc6-4ef0-b7b0-0302fa4482e0"). InnerVolumeSpecName "kube-api-access-5fqxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.765864 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" (UID: "d915a33a-8cc6-4ef0-b7b0-0302fa4482e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.822863 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.822893 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:55 crc kubenswrapper[4823]: I0121 18:15:55.822903 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fqxc\" (UniqueName: \"kubernetes.io/projected/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0-kube-api-access-5fqxc\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.140468 4823 generic.go:334] "Generic (PLEG): container finished" podID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerID="6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7" exitCode=0 Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.140520 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tctxc" event={"ID":"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0","Type":"ContainerDied","Data":"6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7"} Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.140549 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tctxc" event={"ID":"d915a33a-8cc6-4ef0-b7b0-0302fa4482e0","Type":"ContainerDied","Data":"36aca4cdda9fc796840b169f18b08166d60f0fb768544b0c45c6a4d7a75248a9"} Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.140569 4823 scope.go:117] "RemoveContainer" containerID="6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.140717 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tctxc" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.164222 4823 scope.go:117] "RemoveContainer" containerID="6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.190270 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tctxc"] Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.202987 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tctxc"] Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.221623 4823 scope.go:117] "RemoveContainer" containerID="46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.255110 4823 scope.go:117] "RemoveContainer" containerID="6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7" Jan 21 18:15:56 crc kubenswrapper[4823]: E0121 18:15:56.255887 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7\": container with ID starting with 6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7 not found: ID does not exist" containerID="6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.255946 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7"} err="failed to get container status \"6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7\": rpc error: code = NotFound desc = could not find container \"6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7\": container with ID starting with 6484fb835e52dadc72663f271f6401aed137211c6bf66ac2ebcd8e865039f3f7 not found: ID does not exist" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.255976 4823 scope.go:117] "RemoveContainer" containerID="6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8" Jan 21 18:15:56 crc kubenswrapper[4823]: E0121 18:15:56.256498 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8\": container with ID starting with 6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8 not found: ID does not exist" containerID="6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.256527 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8"} err="failed to get container status \"6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8\": rpc error: code = NotFound desc = could not find container \"6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8\": container with ID starting with 6d07f0275e23cfc618551ba740d6c9e0bae6737346f3cf232c1f3a5a1ce1e1f8 not found: ID does not exist" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.256547 4823 scope.go:117] "RemoveContainer" containerID="46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907" Jan 21 18:15:56 crc kubenswrapper[4823]: E0121 18:15:56.256884 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907\": container with ID starting with 46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907 not found: ID does not exist" containerID="46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907" Jan 21 18:15:56 crc kubenswrapper[4823]: I0121 18:15:56.256902 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907"} err="failed to get container status \"46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907\": rpc error: code = NotFound desc = could not find container \"46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907\": container with ID starting with 46bc57dd728174b6d01b7884e836f45e4cc49eae403518540faa1d6529114907 not found: ID does not exist" Jan 21 18:15:57 crc kubenswrapper[4823]: I0121 18:15:57.354993 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" path="/var/lib/kubelet/pods/d915a33a-8cc6-4ef0-b7b0-0302fa4482e0/volumes" Jan 21 18:16:15 crc kubenswrapper[4823]: I0121 18:16:15.070222 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:16:15 crc kubenswrapper[4823]: I0121 18:16:15.070772 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:16:45 crc kubenswrapper[4823]: I0121 18:16:45.070715 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:16:45 crc kubenswrapper[4823]: I0121 18:16:45.071327 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:16:45 crc kubenswrapper[4823]: I0121 18:16:45.071683 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 18:16:45 crc kubenswrapper[4823]: I0121 18:16:45.072484 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:16:45 crc kubenswrapper[4823]: I0121 18:16:45.072530 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" 
podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" gracePeriod=600 Jan 21 18:16:45 crc kubenswrapper[4823]: E0121 18:16:45.202071 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:16:45 crc kubenswrapper[4823]: I0121 18:16:45.634037 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" exitCode=0 Jan 21 18:16:45 crc kubenswrapper[4823]: I0121 18:16:45.634082 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac"} Jan 21 18:16:45 crc kubenswrapper[4823]: I0121 18:16:45.634114 4823 scope.go:117] "RemoveContainer" containerID="ee845ec4eaabe33255a9e404193ffa91dda969aea60e5dee8732e82716b9a4da" Jan 21 18:16:45 crc kubenswrapper[4823]: I0121 18:16:45.634729 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:16:45 crc kubenswrapper[4823]: E0121 18:16:45.635037 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:17:00 crc kubenswrapper[4823]: I0121 18:17:00.343541 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:17:00 crc kubenswrapper[4823]: E0121 18:17:00.344530 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:17:11 crc kubenswrapper[4823]: I0121 18:17:11.343813 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:17:11 crc kubenswrapper[4823]: E0121 18:17:11.346650 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:17:23 crc kubenswrapper[4823]: I0121 18:17:23.344281 4823 scope.go:117] 
"RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:17:23 crc kubenswrapper[4823]: E0121 18:17:23.346116 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:17:24 crc kubenswrapper[4823]: E0121 18:17:24.867673 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 21 18:17:38 crc kubenswrapper[4823]: I0121 18:17:38.344748 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:17:38 crc kubenswrapper[4823]: E0121 18:17:38.346095 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:17:52 crc kubenswrapper[4823]: I0121 18:17:52.343146 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:17:52 crc kubenswrapper[4823]: E0121 18:17:52.345077 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:18:04 crc kubenswrapper[4823]: I0121 18:18:04.343250 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:18:04 crc kubenswrapper[4823]: E0121 18:18:04.344190 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:18:15 crc kubenswrapper[4823]: I0121 18:18:15.343628 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:18:15 crc kubenswrapper[4823]: E0121 18:18:15.344504 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 
18:18:26 crc kubenswrapper[4823]: I0121 18:18:26.344511 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:18:26 crc kubenswrapper[4823]: E0121 18:18:26.345379 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:18:38 crc kubenswrapper[4823]: I0121 18:18:38.344039 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:18:38 crc kubenswrapper[4823]: E0121 18:18:38.345026 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:18:52 crc kubenswrapper[4823]: I0121 18:18:52.344101 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:18:52 crc kubenswrapper[4823]: E0121 18:18:52.344844 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:19:04 crc kubenswrapper[4823]: I0121 18:19:04.343723 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:19:04 crc kubenswrapper[4823]: E0121 18:19:04.344655 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:19:18 crc kubenswrapper[4823]: I0121 18:19:18.343638 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:19:18 crc kubenswrapper[4823]: E0121 18:19:18.344638 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:19:30 crc kubenswrapper[4823]: I0121 18:19:30.343584 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:19:30 crc 
kubenswrapper[4823]: E0121 18:19:30.344383 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:19:42 crc kubenswrapper[4823]: I0121 18:19:42.344401 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:19:42 crc kubenswrapper[4823]: E0121 18:19:42.345266 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:19:55 crc kubenswrapper[4823]: I0121 18:19:55.343819 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:19:55 crc kubenswrapper[4823]: E0121 18:19:55.344702 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:20:08 crc kubenswrapper[4823]: I0121 18:20:08.344216 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:20:08 crc kubenswrapper[4823]: E0121 18:20:08.345380 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:20:19 crc kubenswrapper[4823]: I0121 18:20:19.344741 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:20:19 crc kubenswrapper[4823]: E0121 18:20:19.345374 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:20:34 crc kubenswrapper[4823]: I0121 18:20:34.344496 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:20:34 crc kubenswrapper[4823]: E0121 18:20:34.345452 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:20:46 crc kubenswrapper[4823]: I0121 18:20:46.343939 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:20:46 crc kubenswrapper[4823]: E0121 18:20:46.344708 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:21:01 crc kubenswrapper[4823]: I0121 18:21:01.344208 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:21:01 crc kubenswrapper[4823]: E0121 18:21:01.345059 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.049591 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2722p"] Jan 21 18:21:08 crc kubenswrapper[4823]: E0121 18:21:08.050768 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerName="extract-content" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.050789 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerName="extract-content" Jan 21 18:21:08 crc kubenswrapper[4823]: E0121 18:21:08.050820 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerName="registry-server" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.050829 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerName="registry-server" Jan 21 18:21:08 crc kubenswrapper[4823]: E0121 18:21:08.050883 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerName="extract-utilities" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.050893 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerName="extract-utilities" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.051186 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d915a33a-8cc6-4ef0-b7b0-0302fa4482e0" containerName="registry-server" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.053371 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.060628 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2722p"] Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.148013 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwfk2\" (UniqueName: \"kubernetes.io/projected/90d33858-f657-4d26-9b73-fe710008bcb9-kube-api-access-wwfk2\") pod \"redhat-marketplace-2722p\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.148054 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-utilities\") pod \"redhat-marketplace-2722p\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.148237 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-catalog-content\") pod \"redhat-marketplace-2722p\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.250285 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwfk2\" (UniqueName: \"kubernetes.io/projected/90d33858-f657-4d26-9b73-fe710008bcb9-kube-api-access-wwfk2\") pod \"redhat-marketplace-2722p\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.250330 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-utilities\") pod \"redhat-marketplace-2722p\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.250387 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-catalog-content\") pod \"redhat-marketplace-2722p\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.250951 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-catalog-content\") pod \"redhat-marketplace-2722p\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.250950 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-utilities\") pod \"redhat-marketplace-2722p\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.285948 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wwfk2\" (UniqueName: \"kubernetes.io/projected/90d33858-f657-4d26-9b73-fe710008bcb9-kube-api-access-wwfk2\") pod \"redhat-marketplace-2722p\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.424469 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:08 crc kubenswrapper[4823]: I0121 18:21:08.963245 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2722p"] Jan 21 18:21:09 crc kubenswrapper[4823]: I0121 18:21:09.949948 4823 generic.go:334] "Generic (PLEG): container finished" podID="90d33858-f657-4d26-9b73-fe710008bcb9" containerID="96f3179bbb0c9324f6f82e6bbb66591233bc49b66a80919cbc657b52b1f45454" exitCode=0 Jan 21 18:21:09 crc kubenswrapper[4823]: I0121 18:21:09.950075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2722p" event={"ID":"90d33858-f657-4d26-9b73-fe710008bcb9","Type":"ContainerDied","Data":"96f3179bbb0c9324f6f82e6bbb66591233bc49b66a80919cbc657b52b1f45454"} Jan 21 18:21:09 crc kubenswrapper[4823]: I0121 18:21:09.950272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2722p" event={"ID":"90d33858-f657-4d26-9b73-fe710008bcb9","Type":"ContainerStarted","Data":"7a7592e35a3489d8e4b966c4d9cb5218060d37c4c4aa0873bbaf49dc00fe53aa"} Jan 21 18:21:09 crc kubenswrapper[4823]: I0121 18:21:09.952933 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:21:10 crc kubenswrapper[4823]: I0121 18:21:10.960056 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2722p" event={"ID":"90d33858-f657-4d26-9b73-fe710008bcb9","Type":"ContainerStarted","Data":"c826559eea2099809e507738c857bb9e716c2b79b5566f5e7e1290d067b72feb"} Jan 21 18:21:11 crc kubenswrapper[4823]: I0121 18:21:11.972244 4823 generic.go:334] "Generic (PLEG): container finished" podID="90d33858-f657-4d26-9b73-fe710008bcb9" containerID="c826559eea2099809e507738c857bb9e716c2b79b5566f5e7e1290d067b72feb" exitCode=0 Jan 21 18:21:11 crc kubenswrapper[4823]: I0121 18:21:11.972344 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2722p" event={"ID":"90d33858-f657-4d26-9b73-fe710008bcb9","Type":"ContainerDied","Data":"c826559eea2099809e507738c857bb9e716c2b79b5566f5e7e1290d067b72feb"} Jan 21 18:21:12 crc kubenswrapper[4823]: I0121 18:21:12.983941 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2722p" event={"ID":"90d33858-f657-4d26-9b73-fe710008bcb9","Type":"ContainerStarted","Data":"f0d7a404b52c501c6c9b9e96eca06484e385e69a86b21ea1060c1fce5c188f88"} Jan 21 18:21:13 crc kubenswrapper[4823]: I0121 18:21:13.013315 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2722p" podStartSLOduration=2.612167938 podStartE2EDuration="5.013291954s" podCreationTimestamp="2026-01-21 18:21:08 +0000 UTC" firstStartedPulling="2026-01-21 18:21:09.952674477 +0000 UTC m=+3870.878805337" lastFinishedPulling="2026-01-21 18:21:12.353798503 +0000 UTC m=+3873.279929353" observedRunningTime="2026-01-21 18:21:13.002039677 +0000 UTC m=+3873.928170537" watchObservedRunningTime="2026-01-21 18:21:13.013291954 +0000 UTC 
m=+3873.939422814" Jan 21 18:21:14 crc kubenswrapper[4823]: I0121 18:21:14.343941 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:21:14 crc kubenswrapper[4823]: E0121 18:21:14.344444 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:21:18 crc kubenswrapper[4823]: I0121 18:21:18.425111 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:18 crc kubenswrapper[4823]: I0121 18:21:18.425644 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:18 crc kubenswrapper[4823]: I0121 18:21:18.478022 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:19 crc kubenswrapper[4823]: I0121 18:21:19.083770 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:19 crc kubenswrapper[4823]: I0121 18:21:19.141165 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2722p"] Jan 21 18:21:21 crc kubenswrapper[4823]: I0121 18:21:21.055751 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2722p" podUID="90d33858-f657-4d26-9b73-fe710008bcb9" containerName="registry-server" containerID="cri-o://f0d7a404b52c501c6c9b9e96eca06484e385e69a86b21ea1060c1fce5c188f88" gracePeriod=2 Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.073274 4823 generic.go:334] "Generic (PLEG): container finished" podID="90d33858-f657-4d26-9b73-fe710008bcb9" containerID="f0d7a404b52c501c6c9b9e96eca06484e385e69a86b21ea1060c1fce5c188f88" exitCode=0 Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.073597 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2722p" event={"ID":"90d33858-f657-4d26-9b73-fe710008bcb9","Type":"ContainerDied","Data":"f0d7a404b52c501c6c9b9e96eca06484e385e69a86b21ea1060c1fce5c188f88"} Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.302343 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.494702 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwfk2\" (UniqueName: \"kubernetes.io/projected/90d33858-f657-4d26-9b73-fe710008bcb9-kube-api-access-wwfk2\") pod \"90d33858-f657-4d26-9b73-fe710008bcb9\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.494791 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-utilities\") pod \"90d33858-f657-4d26-9b73-fe710008bcb9\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.494969 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-catalog-content\") pod \"90d33858-f657-4d26-9b73-fe710008bcb9\" (UID: \"90d33858-f657-4d26-9b73-fe710008bcb9\") " Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.496027 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-utilities" (OuterVolumeSpecName: "utilities") pod "90d33858-f657-4d26-9b73-fe710008bcb9" (UID: "90d33858-f657-4d26-9b73-fe710008bcb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.501216 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d33858-f657-4d26-9b73-fe710008bcb9-kube-api-access-wwfk2" (OuterVolumeSpecName: "kube-api-access-wwfk2") pod "90d33858-f657-4d26-9b73-fe710008bcb9" (UID: "90d33858-f657-4d26-9b73-fe710008bcb9"). InnerVolumeSpecName "kube-api-access-wwfk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.521608 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90d33858-f657-4d26-9b73-fe710008bcb9" (UID: "90d33858-f657-4d26-9b73-fe710008bcb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.597749 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwfk2\" (UniqueName: \"kubernetes.io/projected/90d33858-f657-4d26-9b73-fe710008bcb9-kube-api-access-wwfk2\") on node \"crc\" DevicePath \"\"" Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.597787 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:21:22 crc kubenswrapper[4823]: I0121 18:21:22.597798 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d33858-f657-4d26-9b73-fe710008bcb9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:21:23 crc kubenswrapper[4823]: I0121 18:21:23.086073 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2722p" event={"ID":"90d33858-f657-4d26-9b73-fe710008bcb9","Type":"ContainerDied","Data":"7a7592e35a3489d8e4b966c4d9cb5218060d37c4c4aa0873bbaf49dc00fe53aa"} Jan 21 18:21:23 crc kubenswrapper[4823]: I0121 18:21:23.086112 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2722p" Jan 21 18:21:23 crc kubenswrapper[4823]: I0121 18:21:23.086375 4823 scope.go:117] "RemoveContainer" containerID="f0d7a404b52c501c6c9b9e96eca06484e385e69a86b21ea1060c1fce5c188f88" Jan 21 18:21:23 crc kubenswrapper[4823]: I0121 18:21:23.109163 4823 scope.go:117] "RemoveContainer" containerID="c826559eea2099809e507738c857bb9e716c2b79b5566f5e7e1290d067b72feb" Jan 21 18:21:23 crc kubenswrapper[4823]: I0121 18:21:23.118003 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2722p"] Jan 21 18:21:23 crc kubenswrapper[4823]: I0121 18:21:23.129026 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2722p"] Jan 21 18:21:23 crc kubenswrapper[4823]: I0121 18:21:23.142214 4823 scope.go:117] "RemoveContainer" containerID="96f3179bbb0c9324f6f82e6bbb66591233bc49b66a80919cbc657b52b1f45454" Jan 21 18:21:23 crc kubenswrapper[4823]: I0121 18:21:23.354764 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d33858-f657-4d26-9b73-fe710008bcb9" path="/var/lib/kubelet/pods/90d33858-f657-4d26-9b73-fe710008bcb9/volumes" Jan 21 18:21:26 crc kubenswrapper[4823]: I0121 18:21:26.343736 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:21:26 crc kubenswrapper[4823]: E0121 18:21:26.344484 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:21:37 crc kubenswrapper[4823]: I0121 18:21:37.344025 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:21:37 crc kubenswrapper[4823]: E0121 18:21:37.345293 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:21:51 crc kubenswrapper[4823]: I0121 18:21:51.343752 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:21:52 crc kubenswrapper[4823]: I0121 18:21:52.389101 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"f2dba2d865679c2afa01ff437330fe27d537670932c55f067b073a0260c810e4"} Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.681702 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-psnqn"] Jan 21 18:22:17 crc kubenswrapper[4823]: E0121 18:22:17.682759 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d33858-f657-4d26-9b73-fe710008bcb9" containerName="extract-utilities" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.682812 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d33858-f657-4d26-9b73-fe710008bcb9" containerName="extract-utilities" Jan 21 18:22:17 crc kubenswrapper[4823]: E0121 18:22:17.682870 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d33858-f657-4d26-9b73-fe710008bcb9" containerName="registry-server" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.682878 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d33858-f657-4d26-9b73-fe710008bcb9" containerName="registry-server" Jan 21 18:22:17 crc kubenswrapper[4823]: E0121 18:22:17.682896 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d33858-f657-4d26-9b73-fe710008bcb9" containerName="extract-content" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.682903 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d33858-f657-4d26-9b73-fe710008bcb9" containerName="extract-content" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.683148 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d33858-f657-4d26-9b73-fe710008bcb9" containerName="registry-server" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.684954 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.702297 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psnqn"] Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.855293 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-utilities\") pod \"redhat-operators-psnqn\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.855495 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-catalog-content\") pod \"redhat-operators-psnqn\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.855606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2bzg\" (UniqueName: \"kubernetes.io/projected/73404250-9941-4d8f-aaaa-b6680df2536a-kube-api-access-b2bzg\") pod \"redhat-operators-psnqn\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.957057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2bzg\" (UniqueName: \"kubernetes.io/projected/73404250-9941-4d8f-aaaa-b6680df2536a-kube-api-access-b2bzg\") pod \"redhat-operators-psnqn\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.957631 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-utilities\") pod \"redhat-operators-psnqn\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.957945 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-catalog-content\") pod \"redhat-operators-psnqn\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.958212 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-utilities\") pod \"redhat-operators-psnqn\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.958461 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-catalog-content\") pod \"redhat-operators-psnqn\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:17 crc kubenswrapper[4823]: I0121 18:22:17.995308 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b2bzg\" (UniqueName: \"kubernetes.io/projected/73404250-9941-4d8f-aaaa-b6680df2536a-kube-api-access-b2bzg\") pod \"redhat-operators-psnqn\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:18 crc kubenswrapper[4823]: I0121 18:22:18.004391 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:18 crc kubenswrapper[4823]: I0121 18:22:18.584034 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psnqn"] Jan 21 18:22:18 crc kubenswrapper[4823]: I0121 18:22:18.738059 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psnqn" event={"ID":"73404250-9941-4d8f-aaaa-b6680df2536a","Type":"ContainerStarted","Data":"8517bc6dc8991ab22230854f680bba8f14e4c9706de3df5dd5c50759c5590450"} Jan 21 18:22:19 crc kubenswrapper[4823]: I0121 18:22:19.746193 4823 generic.go:334] "Generic (PLEG): container finished" podID="73404250-9941-4d8f-aaaa-b6680df2536a" containerID="692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4" exitCode=0 Jan 21 18:22:19 crc kubenswrapper[4823]: I0121 18:22:19.746245 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psnqn" event={"ID":"73404250-9941-4d8f-aaaa-b6680df2536a","Type":"ContainerDied","Data":"692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4"} Jan 21 18:22:22 crc kubenswrapper[4823]: I0121 18:22:22.771748 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psnqn" event={"ID":"73404250-9941-4d8f-aaaa-b6680df2536a","Type":"ContainerStarted","Data":"0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25"} Jan 21 18:22:23 crc kubenswrapper[4823]: I0121 18:22:23.787251 4823 generic.go:334] "Generic (PLEG): container finished" podID="73404250-9941-4d8f-aaaa-b6680df2536a" containerID="0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25" exitCode=0 Jan 21 18:22:23 crc kubenswrapper[4823]: I0121 18:22:23.787637 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psnqn" event={"ID":"73404250-9941-4d8f-aaaa-b6680df2536a","Type":"ContainerDied","Data":"0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25"} Jan 21 18:22:28 crc kubenswrapper[4823]: I0121 18:22:28.831867 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psnqn" event={"ID":"73404250-9941-4d8f-aaaa-b6680df2536a","Type":"ContainerStarted","Data":"33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3"} Jan 21 18:22:28 crc kubenswrapper[4823]: I0121 18:22:28.857285 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-psnqn" podStartSLOduration=3.69733057 podStartE2EDuration="11.857263979s" podCreationTimestamp="2026-01-21 18:22:17 +0000 UTC" firstStartedPulling="2026-01-21 18:22:19.748215452 +0000 UTC m=+3940.674346322" lastFinishedPulling="2026-01-21 18:22:27.908148881 +0000 UTC m=+3948.834279731" observedRunningTime="2026-01-21 18:22:28.847257881 +0000 UTC m=+3949.773388741" watchObservedRunningTime="2026-01-21 18:22:28.857263979 +0000 UTC m=+3949.783394829" Jan 21 18:22:38 crc kubenswrapper[4823]: I0121 18:22:38.005698 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-psnqn" 
Jan 21 18:22:38 crc kubenswrapper[4823]: I0121 18:22:38.006336 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:38 crc kubenswrapper[4823]: I0121 18:22:38.066724 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:38 crc kubenswrapper[4823]: I0121 18:22:38.967618 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:39 crc kubenswrapper[4823]: I0121 18:22:39.017062 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psnqn"] Jan 21 18:22:40 crc kubenswrapper[4823]: I0121 18:22:40.938775 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-psnqn" podUID="73404250-9941-4d8f-aaaa-b6680df2536a" containerName="registry-server" containerID="cri-o://33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3" gracePeriod=2 Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.747690 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.836239 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2bzg\" (UniqueName: \"kubernetes.io/projected/73404250-9941-4d8f-aaaa-b6680df2536a-kube-api-access-b2bzg\") pod \"73404250-9941-4d8f-aaaa-b6680df2536a\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.836380 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-utilities\") pod \"73404250-9941-4d8f-aaaa-b6680df2536a\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.836457 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-catalog-content\") pod \"73404250-9941-4d8f-aaaa-b6680df2536a\" (UID: \"73404250-9941-4d8f-aaaa-b6680df2536a\") " Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.837386 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-utilities" (OuterVolumeSpecName: "utilities") pod "73404250-9941-4d8f-aaaa-b6680df2536a" (UID: "73404250-9941-4d8f-aaaa-b6680df2536a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.842696 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73404250-9941-4d8f-aaaa-b6680df2536a-kube-api-access-b2bzg" (OuterVolumeSpecName: "kube-api-access-b2bzg") pod "73404250-9941-4d8f-aaaa-b6680df2536a" (UID: "73404250-9941-4d8f-aaaa-b6680df2536a"). InnerVolumeSpecName "kube-api-access-b2bzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.939181 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2bzg\" (UniqueName: \"kubernetes.io/projected/73404250-9941-4d8f-aaaa-b6680df2536a-kube-api-access-b2bzg\") on node \"crc\" DevicePath \"\"" Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.939217 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.949063 4823 generic.go:334] "Generic (PLEG): container finished" podID="73404250-9941-4d8f-aaaa-b6680df2536a" containerID="33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3" exitCode=0 Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.949106 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psnqn" event={"ID":"73404250-9941-4d8f-aaaa-b6680df2536a","Type":"ContainerDied","Data":"33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3"} Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.949134 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psnqn" event={"ID":"73404250-9941-4d8f-aaaa-b6680df2536a","Type":"ContainerDied","Data":"8517bc6dc8991ab22230854f680bba8f14e4c9706de3df5dd5c50759c5590450"} Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.949153 4823 scope.go:117] "RemoveContainer" containerID="33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3" Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.949282 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psnqn" Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.967361 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73404250-9941-4d8f-aaaa-b6680df2536a" (UID: "73404250-9941-4d8f-aaaa-b6680df2536a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:22:41 crc kubenswrapper[4823]: I0121 18:22:41.975201 4823 scope.go:117] "RemoveContainer" containerID="0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25" Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.004230 4823 scope.go:117] "RemoveContainer" containerID="692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4" Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.040727 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73404250-9941-4d8f-aaaa-b6680df2536a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.063948 4823 scope.go:117] "RemoveContainer" containerID="33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3" Jan 21 18:22:42 crc kubenswrapper[4823]: E0121 18:22:42.069813 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3\": container with ID starting with 33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3 not found: ID does not exist" containerID="33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3" Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.069868 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3"} err="failed to get container status \"33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3\": rpc error: code = NotFound desc = could not find container \"33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3\": container with ID starting with 33638e98068361df85103ac506a241a63044e39c08da9b6df73ea2729dc56be3 not found: ID does not exist" Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.069896 4823 scope.go:117] "RemoveContainer" containerID="0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25" Jan 21 18:22:42 crc kubenswrapper[4823]: E0121 18:22:42.070948 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25\": container with ID starting with 0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25 not found: ID does not exist" containerID="0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25" Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.071082 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25"} err="failed to get container status \"0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25\": rpc error: code = NotFound desc = could not find container \"0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25\": container with ID starting with 0f2a2dc2c5f15733f60edc0e37a48349a5f12f80e2d9cdf32a0b3c6f20c76a25 not found: ID does not exist" Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.071192 4823 scope.go:117] "RemoveContainer" containerID="692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4" Jan 21 18:22:42 crc kubenswrapper[4823]: E0121 18:22:42.071622 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4\": container with ID starting with 692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4 not found: ID does not exist" containerID="692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4" Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.071719 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4"} err="failed to get container status \"692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4\": rpc error: code = NotFound desc = could not find container \"692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4\": container with ID starting with 692692c585fa052cf3973ff42fad49a146b6cf36393a874a795286447b08f9a4 not found: ID does not exist" Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.285798 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psnqn"] Jan 21 18:22:42 crc kubenswrapper[4823]: I0121 18:22:42.295847 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-psnqn"] Jan 21 18:22:43 crc kubenswrapper[4823]: I0121 18:22:43.355217 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73404250-9941-4d8f-aaaa-b6680df2536a" path="/var/lib/kubelet/pods/73404250-9941-4d8f-aaaa-b6680df2536a/volumes" Jan 21 18:24:15 crc kubenswrapper[4823]: I0121 18:24:15.070480 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:24:15 crc kubenswrapper[4823]: I0121 18:24:15.071248 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:24:45 crc kubenswrapper[4823]: I0121 18:24:45.071423 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:24:45 crc kubenswrapper[4823]: I0121 18:24:45.072063 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:25:15 crc kubenswrapper[4823]: I0121 18:25:15.071065 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:25:15 crc kubenswrapper[4823]: I0121 18:25:15.071628 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:25:15 crc kubenswrapper[4823]: I0121 18:25:15.071674 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 18:25:15 crc kubenswrapper[4823]: I0121 18:25:15.072490 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2dba2d865679c2afa01ff437330fe27d537670932c55f067b073a0260c810e4"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:25:15 crc kubenswrapper[4823]: I0121 18:25:15.072550 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://f2dba2d865679c2afa01ff437330fe27d537670932c55f067b073a0260c810e4" gracePeriod=600 Jan 21 18:25:15 crc kubenswrapper[4823]: I0121 18:25:15.320807 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="f2dba2d865679c2afa01ff437330fe27d537670932c55f067b073a0260c810e4" exitCode=0 Jan 21 18:25:15 crc kubenswrapper[4823]: I0121 18:25:15.320874 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"f2dba2d865679c2afa01ff437330fe27d537670932c55f067b073a0260c810e4"} Jan 21 18:25:15 crc kubenswrapper[4823]: I0121 18:25:15.321130 4823 scope.go:117] "RemoveContainer" containerID="0e6f77b5f8c279363e489c703d21245390293efa238c6fb5900fb162db26a9ac" Jan 21 18:25:16 crc kubenswrapper[4823]: I0121 18:25:16.342007 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed"} Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.577521 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2vt4v"] Jan 21 18:26:01 crc kubenswrapper[4823]: E0121 18:26:01.578439 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73404250-9941-4d8f-aaaa-b6680df2536a" containerName="registry-server" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.578453 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="73404250-9941-4d8f-aaaa-b6680df2536a" containerName="registry-server" Jan 21 18:26:01 crc kubenswrapper[4823]: E0121 18:26:01.578505 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73404250-9941-4d8f-aaaa-b6680df2536a" containerName="extract-content" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.578513 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="73404250-9941-4d8f-aaaa-b6680df2536a" containerName="extract-content" Jan 21 18:26:01 crc kubenswrapper[4823]: E0121 18:26:01.578531 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73404250-9941-4d8f-aaaa-b6680df2536a" containerName="extract-utilities" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.578538 4823 
state_mem.go:107] "Deleted CPUSet assignment" podUID="73404250-9941-4d8f-aaaa-b6680df2536a" containerName="extract-utilities" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.578723 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="73404250-9941-4d8f-aaaa-b6680df2536a" containerName="registry-server" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.580177 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.590228 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2vt4v"] Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.670500 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndhrt\" (UniqueName: \"kubernetes.io/projected/940db9a2-6944-4ffe-a14f-fd7925061d2a-kube-api-access-ndhrt\") pod \"certified-operators-2vt4v\" (UID: \"940db9a2-6944-4ffe-a14f-fd7925061d2a\") " pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.670601 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/940db9a2-6944-4ffe-a14f-fd7925061d2a-utilities\") pod \"certified-operators-2vt4v\" (UID: \"940db9a2-6944-4ffe-a14f-fd7925061d2a\") " pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.670618 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/940db9a2-6944-4ffe-a14f-fd7925061d2a-catalog-content\") pod \"certified-operators-2vt4v\" (UID: \"940db9a2-6944-4ffe-a14f-fd7925061d2a\") " pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.776582 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndhrt\" (UniqueName: \"kubernetes.io/projected/940db9a2-6944-4ffe-a14f-fd7925061d2a-kube-api-access-ndhrt\") pod \"certified-operators-2vt4v\" (UID: \"940db9a2-6944-4ffe-a14f-fd7925061d2a\") " pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.776697 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/940db9a2-6944-4ffe-a14f-fd7925061d2a-utilities\") pod \"certified-operators-2vt4v\" (UID: \"940db9a2-6944-4ffe-a14f-fd7925061d2a\") " pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.776718 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/940db9a2-6944-4ffe-a14f-fd7925061d2a-catalog-content\") pod \"certified-operators-2vt4v\" (UID: \"940db9a2-6944-4ffe-a14f-fd7925061d2a\") " pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.777328 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/940db9a2-6944-4ffe-a14f-fd7925061d2a-catalog-content\") pod \"certified-operators-2vt4v\" (UID: \"940db9a2-6944-4ffe-a14f-fd7925061d2a\") " pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc 
kubenswrapper[4823]: I0121 18:26:01.777892 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/940db9a2-6944-4ffe-a14f-fd7925061d2a-utilities\") pod \"certified-operators-2vt4v\" (UID: \"940db9a2-6944-4ffe-a14f-fd7925061d2a\") " pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.799501 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndhrt\" (UniqueName: \"kubernetes.io/projected/940db9a2-6944-4ffe-a14f-fd7925061d2a-kube-api-access-ndhrt\") pod \"certified-operators-2vt4v\" (UID: \"940db9a2-6944-4ffe-a14f-fd7925061d2a\") " pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:01 crc kubenswrapper[4823]: I0121 18:26:01.901612 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:02 crc kubenswrapper[4823]: I0121 18:26:02.518074 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2vt4v"] Jan 21 18:26:02 crc kubenswrapper[4823]: I0121 18:26:02.794153 4823 generic.go:334] "Generic (PLEG): container finished" podID="940db9a2-6944-4ffe-a14f-fd7925061d2a" containerID="6089f5e82832629366f60af8b7528dd6fd1149792f8c6a71fd8723b418fa93e9" exitCode=0 Jan 21 18:26:02 crc kubenswrapper[4823]: I0121 18:26:02.794340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vt4v" event={"ID":"940db9a2-6944-4ffe-a14f-fd7925061d2a","Type":"ContainerDied","Data":"6089f5e82832629366f60af8b7528dd6fd1149792f8c6a71fd8723b418fa93e9"} Jan 21 18:26:02 crc kubenswrapper[4823]: I0121 18:26:02.794459 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vt4v" event={"ID":"940db9a2-6944-4ffe-a14f-fd7925061d2a","Type":"ContainerStarted","Data":"fe9f64c56383ceeecc547af13f1889cfdf88597043a4eff201e2dfda5072bd54"} Jan 21 18:26:07 crc kubenswrapper[4823]: I0121 18:26:07.842226 4823 generic.go:334] "Generic (PLEG): container finished" podID="940db9a2-6944-4ffe-a14f-fd7925061d2a" containerID="a574a537d6529b6f2baf0ca3eff57ff2342c090eeb113f933360edd0e862cfff" exitCode=0 Jan 21 18:26:07 crc kubenswrapper[4823]: I0121 18:26:07.842712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vt4v" event={"ID":"940db9a2-6944-4ffe-a14f-fd7925061d2a","Type":"ContainerDied","Data":"a574a537d6529b6f2baf0ca3eff57ff2342c090eeb113f933360edd0e862cfff"} Jan 21 18:26:09 crc kubenswrapper[4823]: I0121 18:26:09.863444 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vt4v" event={"ID":"940db9a2-6944-4ffe-a14f-fd7925061d2a","Type":"ContainerStarted","Data":"532d7749975a9f5ac93695e737b4e44b26cb3851d1b628cf9a74228c33e47b85"} Jan 21 18:26:09 crc kubenswrapper[4823]: I0121 18:26:09.898997 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2vt4v" podStartSLOduration=2.674484054 podStartE2EDuration="8.898974281s" podCreationTimestamp="2026-01-21 18:26:01 +0000 UTC" firstStartedPulling="2026-01-21 18:26:02.796235797 +0000 UTC m=+4163.722366657" lastFinishedPulling="2026-01-21 18:26:09.020726024 +0000 UTC m=+4169.946856884" observedRunningTime="2026-01-21 18:26:09.886613726 +0000 UTC m=+4170.812744586" watchObservedRunningTime="2026-01-21 18:26:09.898974281 +0000 UTC 
m=+4170.825105141" Jan 21 18:26:11 crc kubenswrapper[4823]: I0121 18:26:11.902489 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:11 crc kubenswrapper[4823]: I0121 18:26:11.902907 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:11 crc kubenswrapper[4823]: I0121 18:26:11.960281 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:21 crc kubenswrapper[4823]: I0121 18:26:21.953137 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2vt4v" Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.051065 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2vt4v"] Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.082541 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dj79q"] Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.083023 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dj79q" podUID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerName="registry-server" containerID="cri-o://196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a" gracePeriod=2 Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.617072 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dj79q" Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.705822 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slj22\" (UniqueName: \"kubernetes.io/projected/f48059ea-e3f3-4b21-a108-ea45d532536a-kube-api-access-slj22\") pod \"f48059ea-e3f3-4b21-a108-ea45d532536a\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.706004 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-utilities\") pod \"f48059ea-e3f3-4b21-a108-ea45d532536a\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.706038 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-catalog-content\") pod \"f48059ea-e3f3-4b21-a108-ea45d532536a\" (UID: \"f48059ea-e3f3-4b21-a108-ea45d532536a\") " Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.709047 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-utilities" (OuterVolumeSpecName: "utilities") pod "f48059ea-e3f3-4b21-a108-ea45d532536a" (UID: "f48059ea-e3f3-4b21-a108-ea45d532536a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.724192 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48059ea-e3f3-4b21-a108-ea45d532536a-kube-api-access-slj22" (OuterVolumeSpecName: "kube-api-access-slj22") pod "f48059ea-e3f3-4b21-a108-ea45d532536a" (UID: "f48059ea-e3f3-4b21-a108-ea45d532536a"). 
InnerVolumeSpecName "kube-api-access-slj22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.808432 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.808477 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slj22\" (UniqueName: \"kubernetes.io/projected/f48059ea-e3f3-4b21-a108-ea45d532536a-kube-api-access-slj22\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.813509 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f48059ea-e3f3-4b21-a108-ea45d532536a" (UID: "f48059ea-e3f3-4b21-a108-ea45d532536a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.910390 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48059ea-e3f3-4b21-a108-ea45d532536a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.984029 4823 generic.go:334] "Generic (PLEG): container finished" podID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerID="196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a" exitCode=0 Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.984073 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj79q" event={"ID":"f48059ea-e3f3-4b21-a108-ea45d532536a","Type":"ContainerDied","Data":"196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a"} Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.984102 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj79q" event={"ID":"f48059ea-e3f3-4b21-a108-ea45d532536a","Type":"ContainerDied","Data":"9a2a03f36200696e4e5c24744485ac4042dce918287ff19e1fa500090a26e5d0"} Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.984110 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dj79q" Jan 21 18:26:22 crc kubenswrapper[4823]: I0121 18:26:22.984119 4823 scope.go:117] "RemoveContainer" containerID="196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a" Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.010595 4823 scope.go:117] "RemoveContainer" containerID="f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071" Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.043331 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dj79q"] Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.053967 4823 scope.go:117] "RemoveContainer" containerID="528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c" Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.060115 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dj79q"] Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.131881 4823 scope.go:117] "RemoveContainer" containerID="196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a" Jan 21 18:26:23 crc kubenswrapper[4823]: E0121 18:26:23.132427 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a\": container with ID starting with 196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a not found: ID does not exist" containerID="196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a" Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.132464 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a"} err="failed to get container status \"196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a\": rpc error: code = NotFound desc = could not find container \"196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a\": container with ID starting with 196aab24b557397578eab3733b2274cef460301e81e11678e51a2ebb6c83795a not found: ID does not exist" Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.132489 4823 scope.go:117] "RemoveContainer" containerID="f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071" Jan 21 18:26:23 crc kubenswrapper[4823]: E0121 18:26:23.132823 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071\": container with ID starting with f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071 not found: ID does not exist" containerID="f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071" Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.132863 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071"} err="failed to get container status \"f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071\": rpc error: code = NotFound desc = could not find container \"f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071\": container with ID starting with f04dd8a573f34fcc5be2cc3bcb2d7037bc8e27d97dfd91ca651af9e3c9ce7071 not found: ID does not exist" Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.132878 4823 scope.go:117] "RemoveContainer" 
containerID="528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c" Jan 21 18:26:23 crc kubenswrapper[4823]: E0121 18:26:23.135201 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c\": container with ID starting with 528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c not found: ID does not exist" containerID="528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c" Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.135351 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c"} err="failed to get container status \"528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c\": rpc error: code = NotFound desc = could not find container \"528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c\": container with ID starting with 528db0d98f91a097a7f81daa3dfc2ab644193f8f239b2aa0e3a00e52ed82db9c not found: ID does not exist" Jan 21 18:26:23 crc kubenswrapper[4823]: I0121 18:26:23.354312 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48059ea-e3f3-4b21-a108-ea45d532536a" path="/var/lib/kubelet/pods/f48059ea-e3f3-4b21-a108-ea45d532536a/volumes" Jan 21 18:27:15 crc kubenswrapper[4823]: I0121 18:27:15.070475 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:27:15 crc kubenswrapper[4823]: I0121 18:27:15.070999 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.394844 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9vbhn"] Jan 21 18:27:20 crc kubenswrapper[4823]: E0121 18:27:20.395772 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerName="extract-content" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.395787 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerName="extract-content" Jan 21 18:27:20 crc kubenswrapper[4823]: E0121 18:27:20.395808 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerName="registry-server" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.395816 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerName="registry-server" Jan 21 18:27:20 crc kubenswrapper[4823]: E0121 18:27:20.395870 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerName="extract-utilities" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.395881 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerName="extract-utilities" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 
18:27:20.396095 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48059ea-e3f3-4b21-a108-ea45d532536a" containerName="registry-server" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.397839 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.421348 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9vbhn"] Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.552699 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-utilities\") pod \"community-operators-9vbhn\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.552773 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtsbv\" (UniqueName: \"kubernetes.io/projected/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-kube-api-access-vtsbv\") pod \"community-operators-9vbhn\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.552993 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-catalog-content\") pod \"community-operators-9vbhn\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.654797 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-catalog-content\") pod \"community-operators-9vbhn\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.655149 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-utilities\") pod \"community-operators-9vbhn\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.655173 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtsbv\" (UniqueName: \"kubernetes.io/projected/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-kube-api-access-vtsbv\") pod \"community-operators-9vbhn\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.655567 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-utilities\") pod \"community-operators-9vbhn\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.655621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-catalog-content\") pod \"community-operators-9vbhn\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.677682 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtsbv\" (UniqueName: \"kubernetes.io/projected/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-kube-api-access-vtsbv\") pod \"community-operators-9vbhn\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:20 crc kubenswrapper[4823]: I0121 18:27:20.720983 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:21 crc kubenswrapper[4823]: I0121 18:27:21.248255 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9vbhn"] Jan 21 18:27:21 crc kubenswrapper[4823]: I0121 18:27:21.494780 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vbhn" event={"ID":"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc","Type":"ContainerStarted","Data":"aa3957a77be2fdcabc2ed119892eab0def720ca9e8a9a1453ef59926aa561763"} Jan 21 18:27:21 crc kubenswrapper[4823]: I0121 18:27:21.495076 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vbhn" event={"ID":"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc","Type":"ContainerStarted","Data":"99c19468919a13cd95cc0f522fef999ea10149353f925313f5afdae25a3ed30d"} Jan 21 18:27:21 crc kubenswrapper[4823]: I0121 18:27:21.496737 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:27:22 crc kubenswrapper[4823]: I0121 18:27:22.505012 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerID="aa3957a77be2fdcabc2ed119892eab0def720ca9e8a9a1453ef59926aa561763" exitCode=0 Jan 21 18:27:22 crc kubenswrapper[4823]: I0121 18:27:22.505110 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vbhn" event={"ID":"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc","Type":"ContainerDied","Data":"aa3957a77be2fdcabc2ed119892eab0def720ca9e8a9a1453ef59926aa561763"} Jan 21 18:27:23 crc kubenswrapper[4823]: I0121 18:27:23.520122 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerID="8de4df51122b2411e7ad17190d9e5c05fb671d2df1893a051d0e8b6be7078516" exitCode=0 Jan 21 18:27:23 crc kubenswrapper[4823]: I0121 18:27:23.520206 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vbhn" event={"ID":"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc","Type":"ContainerDied","Data":"8de4df51122b2411e7ad17190d9e5c05fb671d2df1893a051d0e8b6be7078516"} Jan 21 18:27:24 crc kubenswrapper[4823]: I0121 18:27:24.533643 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vbhn" event={"ID":"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc","Type":"ContainerStarted","Data":"ccb99823f476f7e49aac5f8bff956b7ecc4c5bca8c47db2eb0be9d177e30a0f9"} Jan 21 18:27:24 crc kubenswrapper[4823]: I0121 18:27:24.557480 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9vbhn" podStartSLOduration=1.916285475 podStartE2EDuration="4.557461694s" 
podCreationTimestamp="2026-01-21 18:27:20 +0000 UTC" firstStartedPulling="2026-01-21 18:27:21.496516694 +0000 UTC m=+4242.422647554" lastFinishedPulling="2026-01-21 18:27:24.137692923 +0000 UTC m=+4245.063823773" observedRunningTime="2026-01-21 18:27:24.556236703 +0000 UTC m=+4245.482367573" watchObservedRunningTime="2026-01-21 18:27:24.557461694 +0000 UTC m=+4245.483592554" Jan 21 18:27:30 crc kubenswrapper[4823]: I0121 18:27:30.721682 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:30 crc kubenswrapper[4823]: I0121 18:27:30.722236 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:30 crc kubenswrapper[4823]: I0121 18:27:30.779264 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:31 crc kubenswrapper[4823]: I0121 18:27:31.663518 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:35 crc kubenswrapper[4823]: I0121 18:27:35.182646 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9vbhn"] Jan 21 18:27:35 crc kubenswrapper[4823]: I0121 18:27:35.183564 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9vbhn" podUID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerName="registry-server" containerID="cri-o://ccb99823f476f7e49aac5f8bff956b7ecc4c5bca8c47db2eb0be9d177e30a0f9" gracePeriod=2 Jan 21 18:27:35 crc kubenswrapper[4823]: I0121 18:27:35.631252 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerID="ccb99823f476f7e49aac5f8bff956b7ecc4c5bca8c47db2eb0be9d177e30a0f9" exitCode=0 Jan 21 18:27:35 crc kubenswrapper[4823]: I0121 18:27:35.631314 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vbhn" event={"ID":"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc","Type":"ContainerDied","Data":"ccb99823f476f7e49aac5f8bff956b7ecc4c5bca8c47db2eb0be9d177e30a0f9"} Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.073718 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.261808 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtsbv\" (UniqueName: \"kubernetes.io/projected/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-kube-api-access-vtsbv\") pod \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.261955 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-utilities\") pod \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.262012 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-catalog-content\") pod \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\" (UID: \"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc\") " Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.266061 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-utilities" (OuterVolumeSpecName: "utilities") pod "9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" (UID: "9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.277188 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-kube-api-access-vtsbv" (OuterVolumeSpecName: "kube-api-access-vtsbv") pod "9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" (UID: "9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc"). InnerVolumeSpecName "kube-api-access-vtsbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.313936 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" (UID: "9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.365330 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.365894 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.365997 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtsbv\" (UniqueName: \"kubernetes.io/projected/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc-kube-api-access-vtsbv\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.641496 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vbhn" event={"ID":"9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc","Type":"ContainerDied","Data":"99c19468919a13cd95cc0f522fef999ea10149353f925313f5afdae25a3ed30d"} Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.641549 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9vbhn" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.641559 4823 scope.go:117] "RemoveContainer" containerID="ccb99823f476f7e49aac5f8bff956b7ecc4c5bca8c47db2eb0be9d177e30a0f9" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.662460 4823 scope.go:117] "RemoveContainer" containerID="8de4df51122b2411e7ad17190d9e5c05fb671d2df1893a051d0e8b6be7078516" Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.672103 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9vbhn"] Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.685923 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9vbhn"] Jan 21 18:27:36 crc kubenswrapper[4823]: I0121 18:27:36.693330 4823 scope.go:117] "RemoveContainer" containerID="aa3957a77be2fdcabc2ed119892eab0def720ca9e8a9a1453ef59926aa561763" Jan 21 18:27:37 crc kubenswrapper[4823]: I0121 18:27:37.364033 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" path="/var/lib/kubelet/pods/9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc/volumes" Jan 21 18:27:45 crc kubenswrapper[4823]: I0121 18:27:45.070700 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:27:45 crc kubenswrapper[4823]: I0121 18:27:45.071557 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:28:15 crc kubenswrapper[4823]: I0121 18:28:15.070935 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:28:15 crc kubenswrapper[4823]: I0121 18:28:15.071491 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:28:15 crc kubenswrapper[4823]: I0121 18:28:15.071541 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 18:28:15 crc kubenswrapper[4823]: I0121 18:28:15.072112 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:28:15 crc kubenswrapper[4823]: I0121 18:28:15.072177 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" gracePeriod=600 Jan 21 18:28:15 crc kubenswrapper[4823]: E0121 18:28:15.200731 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:28:16 crc kubenswrapper[4823]: I0121 18:28:16.024777 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" exitCode=0 Jan 21 18:28:16 crc kubenswrapper[4823]: I0121 18:28:16.024829 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed"} Jan 21 18:28:16 crc kubenswrapper[4823]: I0121 18:28:16.024886 4823 scope.go:117] "RemoveContainer" containerID="f2dba2d865679c2afa01ff437330fe27d537670932c55f067b073a0260c810e4" Jan 21 18:28:16 crc kubenswrapper[4823]: I0121 18:28:16.025639 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:28:16 crc kubenswrapper[4823]: E0121 18:28:16.026135 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:28:27 crc kubenswrapper[4823]: I0121 18:28:27.350079 4823 scope.go:117] "RemoveContainer" 
containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:28:27 crc kubenswrapper[4823]: E0121 18:28:27.353298 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:28:41 crc kubenswrapper[4823]: I0121 18:28:41.345566 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:28:41 crc kubenswrapper[4823]: E0121 18:28:41.346726 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:28:54 crc kubenswrapper[4823]: I0121 18:28:54.344501 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:28:54 crc kubenswrapper[4823]: E0121 18:28:54.345372 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:29:09 crc kubenswrapper[4823]: I0121 18:29:09.354682 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:29:09 crc kubenswrapper[4823]: E0121 18:29:09.355468 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:29:20 crc kubenswrapper[4823]: I0121 18:29:20.344143 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:29:20 crc kubenswrapper[4823]: E0121 18:29:20.345209 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:29:35 crc kubenswrapper[4823]: I0121 18:29:35.344214 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:29:35 crc kubenswrapper[4823]: E0121 18:29:35.345027 4823 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:29:50 crc kubenswrapper[4823]: I0121 18:29:50.343811 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:29:50 crc kubenswrapper[4823]: E0121 18:29:50.344667 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.184614 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m"] Jan 21 18:30:00 crc kubenswrapper[4823]: E0121 18:30:00.185713 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerName="registry-server" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.185732 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerName="registry-server" Jan 21 18:30:00 crc kubenswrapper[4823]: E0121 18:30:00.185752 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerName="extract-utilities" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.185761 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerName="extract-utilities" Jan 21 18:30:00 crc kubenswrapper[4823]: E0121 18:30:00.185778 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerName="extract-content" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.185787 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerName="extract-content" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.186085 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e2bdd5b-2c91-4dc6-b1c4-6c29c6663dfc" containerName="registry-server" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.189635 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.195291 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.195564 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.200947 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m"] Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.473958 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2894201f-d7a0-4543-8105-d4c9340a98c9-secret-volume\") pod \"collect-profiles-29483670-jn57m\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.474141 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n687x\" (UniqueName: \"kubernetes.io/projected/2894201f-d7a0-4543-8105-d4c9340a98c9-kube-api-access-n687x\") pod \"collect-profiles-29483670-jn57m\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.474189 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2894201f-d7a0-4543-8105-d4c9340a98c9-config-volume\") pod \"collect-profiles-29483670-jn57m\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.575788 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2894201f-d7a0-4543-8105-d4c9340a98c9-config-volume\") pod \"collect-profiles-29483670-jn57m\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.575900 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2894201f-d7a0-4543-8105-d4c9340a98c9-secret-volume\") pod \"collect-profiles-29483670-jn57m\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.576960 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n687x\" (UniqueName: \"kubernetes.io/projected/2894201f-d7a0-4543-8105-d4c9340a98c9-kube-api-access-n687x\") pod \"collect-profiles-29483670-jn57m\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.577978 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2894201f-d7a0-4543-8105-d4c9340a98c9-config-volume\") pod 
\"collect-profiles-29483670-jn57m\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.582969 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2894201f-d7a0-4543-8105-d4c9340a98c9-secret-volume\") pod \"collect-profiles-29483670-jn57m\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.597139 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n687x\" (UniqueName: \"kubernetes.io/projected/2894201f-d7a0-4543-8105-d4c9340a98c9-kube-api-access-n687x\") pod \"collect-profiles-29483670-jn57m\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:00 crc kubenswrapper[4823]: I0121 18:30:00.815776 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:01 crc kubenswrapper[4823]: I0121 18:30:01.259023 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m"] Jan 21 18:30:01 crc kubenswrapper[4823]: I0121 18:30:01.994840 4823 generic.go:334] "Generic (PLEG): container finished" podID="2894201f-d7a0-4543-8105-d4c9340a98c9" containerID="59ababfe23956811f8562fd4ef1db35754e2a50d83a8cdb1cfe71f9bd87632ae" exitCode=0 Jan 21 18:30:01 crc kubenswrapper[4823]: I0121 18:30:01.995075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" event={"ID":"2894201f-d7a0-4543-8105-d4c9340a98c9","Type":"ContainerDied","Data":"59ababfe23956811f8562fd4ef1db35754e2a50d83a8cdb1cfe71f9bd87632ae"} Jan 21 18:30:01 crc kubenswrapper[4823]: I0121 18:30:01.995181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" event={"ID":"2894201f-d7a0-4543-8105-d4c9340a98c9","Type":"ContainerStarted","Data":"938024e75d1539b1715cf70babb848cd8214425b174b02385ef442587b1f6e05"} Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.416670 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.533718 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2894201f-d7a0-4543-8105-d4c9340a98c9-secret-volume\") pod \"2894201f-d7a0-4543-8105-d4c9340a98c9\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.534276 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n687x\" (UniqueName: \"kubernetes.io/projected/2894201f-d7a0-4543-8105-d4c9340a98c9-kube-api-access-n687x\") pod \"2894201f-d7a0-4543-8105-d4c9340a98c9\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.534511 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2894201f-d7a0-4543-8105-d4c9340a98c9-config-volume\") pod \"2894201f-d7a0-4543-8105-d4c9340a98c9\" (UID: \"2894201f-d7a0-4543-8105-d4c9340a98c9\") " Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.535108 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2894201f-d7a0-4543-8105-d4c9340a98c9-config-volume" (OuterVolumeSpecName: "config-volume") pod "2894201f-d7a0-4543-8105-d4c9340a98c9" (UID: "2894201f-d7a0-4543-8105-d4c9340a98c9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.539986 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2894201f-d7a0-4543-8105-d4c9340a98c9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2894201f-d7a0-4543-8105-d4c9340a98c9" (UID: "2894201f-d7a0-4543-8105-d4c9340a98c9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.543825 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2894201f-d7a0-4543-8105-d4c9340a98c9-kube-api-access-n687x" (OuterVolumeSpecName: "kube-api-access-n687x") pod "2894201f-d7a0-4543-8105-d4c9340a98c9" (UID: "2894201f-d7a0-4543-8105-d4c9340a98c9"). InnerVolumeSpecName "kube-api-access-n687x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.637337 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2894201f-d7a0-4543-8105-d4c9340a98c9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.637399 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n687x\" (UniqueName: \"kubernetes.io/projected/2894201f-d7a0-4543-8105-d4c9340a98c9-kube-api-access-n687x\") on node \"crc\" DevicePath \"\"" Jan 21 18:30:03 crc kubenswrapper[4823]: I0121 18:30:03.637418 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2894201f-d7a0-4543-8105-d4c9340a98c9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:30:04 crc kubenswrapper[4823]: I0121 18:30:04.017171 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" event={"ID":"2894201f-d7a0-4543-8105-d4c9340a98c9","Type":"ContainerDied","Data":"938024e75d1539b1715cf70babb848cd8214425b174b02385ef442587b1f6e05"} Jan 21 18:30:04 crc kubenswrapper[4823]: I0121 18:30:04.017213 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="938024e75d1539b1715cf70babb848cd8214425b174b02385ef442587b1f6e05" Jan 21 18:30:04 crc kubenswrapper[4823]: I0121 18:30:04.017256 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-jn57m" Jan 21 18:30:04 crc kubenswrapper[4823]: I0121 18:30:04.344712 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:30:04 crc kubenswrapper[4823]: E0121 18:30:04.345638 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:30:04 crc kubenswrapper[4823]: I0121 18:30:04.490475 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f"] Jan 21 18:30:04 crc kubenswrapper[4823]: I0121 18:30:04.499554 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483625-zvw5f"] Jan 21 18:30:05 crc kubenswrapper[4823]: I0121 18:30:05.362250 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77c50ca9-47c8-493c-9e9b-28dedd84c304" path="/var/lib/kubelet/pods/77c50ca9-47c8-493c-9e9b-28dedd84c304/volumes" Jan 21 18:30:16 crc kubenswrapper[4823]: I0121 18:30:16.343527 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:30:16 crc kubenswrapper[4823]: E0121 18:30:16.344282 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:30:27 crc kubenswrapper[4823]: I0121 18:30:27.344433 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:30:27 crc kubenswrapper[4823]: E0121 18:30:27.345230 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:30:30 crc kubenswrapper[4823]: I0121 18:30:30.353644 4823 scope.go:117] "RemoveContainer" containerID="29f8e9600dfb7fe8337bd1e0282219c2cfea4a9de868b31f19a1d7d54c67cfdc" Jan 21 18:30:39 crc kubenswrapper[4823]: I0121 18:30:39.353414 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:30:39 crc kubenswrapper[4823]: E0121 18:30:39.354250 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:30:50 crc kubenswrapper[4823]: I0121 18:30:50.344241 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:30:50 crc kubenswrapper[4823]: E0121 18:30:50.345183 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:31:02 crc kubenswrapper[4823]: I0121 18:31:02.344675 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:31:02 crc kubenswrapper[4823]: E0121 18:31:02.346607 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:31:14 crc kubenswrapper[4823]: I0121 18:31:14.344630 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:31:14 crc kubenswrapper[4823]: E0121 18:31:14.345452 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:31:27 crc kubenswrapper[4823]: I0121 18:31:27.344048 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:31:27 crc kubenswrapper[4823]: E0121 18:31:27.345098 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:31:41 crc kubenswrapper[4823]: I0121 18:31:41.343514 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:31:41 crc kubenswrapper[4823]: E0121 18:31:41.344376 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.621288 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8js6h"] Jan 21 18:31:53 crc kubenswrapper[4823]: E0121 18:31:53.622986 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2894201f-d7a0-4543-8105-d4c9340a98c9" containerName="collect-profiles" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.623024 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2894201f-d7a0-4543-8105-d4c9340a98c9" containerName="collect-profiles" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.623436 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2894201f-d7a0-4543-8105-d4c9340a98c9" containerName="collect-profiles" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.625917 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.644126 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8js6h"] Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.686228 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2zkq\" (UniqueName: \"kubernetes.io/projected/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-kube-api-access-d2zkq\") pod \"redhat-marketplace-8js6h\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.686745 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-catalog-content\") pod \"redhat-marketplace-8js6h\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.687001 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-utilities\") pod \"redhat-marketplace-8js6h\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.790670 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-catalog-content\") pod \"redhat-marketplace-8js6h\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.790046 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-catalog-content\") pod \"redhat-marketplace-8js6h\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.790980 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-utilities\") pod \"redhat-marketplace-8js6h\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.791346 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-utilities\") pod \"redhat-marketplace-8js6h\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.791443 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zkq\" (UniqueName: \"kubernetes.io/projected/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-kube-api-access-d2zkq\") pod \"redhat-marketplace-8js6h\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.817395 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-d2zkq\" (UniqueName: \"kubernetes.io/projected/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-kube-api-access-d2zkq\") pod \"redhat-marketplace-8js6h\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:53 crc kubenswrapper[4823]: I0121 18:31:53.948243 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:31:54 crc kubenswrapper[4823]: I0121 18:31:54.520077 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8js6h"] Jan 21 18:31:54 crc kubenswrapper[4823]: I0121 18:31:54.990285 4823 generic.go:334] "Generic (PLEG): container finished" podID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerID="b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8" exitCode=0 Jan 21 18:31:54 crc kubenswrapper[4823]: I0121 18:31:54.990593 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8js6h" event={"ID":"09cb8ad2-1a7a-4af9-a114-625fafc4cb92","Type":"ContainerDied","Data":"b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8"} Jan 21 18:31:54 crc kubenswrapper[4823]: I0121 18:31:54.990630 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8js6h" event={"ID":"09cb8ad2-1a7a-4af9-a114-625fafc4cb92","Type":"ContainerStarted","Data":"21858dbccde84bc2f3b3543cecbc2d0bc4618b23dbc20ef87743a911821783e4"} Jan 21 18:31:55 crc kubenswrapper[4823]: I0121 18:31:55.344936 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:31:55 crc kubenswrapper[4823]: E0121 18:31:55.345559 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:31:56 crc kubenswrapper[4823]: I0121 18:31:56.002515 4823 generic.go:334] "Generic (PLEG): container finished" podID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerID="b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003" exitCode=0 Jan 21 18:31:56 crc kubenswrapper[4823]: I0121 18:31:56.002587 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8js6h" event={"ID":"09cb8ad2-1a7a-4af9-a114-625fafc4cb92","Type":"ContainerDied","Data":"b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003"} Jan 21 18:31:56 crc kubenswrapper[4823]: E0121 18:31:56.161785 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09cb8ad2_1a7a_4af9_a114_625fafc4cb92.slice/crio-conmon-b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09cb8ad2_1a7a_4af9_a114_625fafc4cb92.slice/crio-b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003.scope\": RecentStats: unable to find data in memory cache]" Jan 21 18:31:57 crc kubenswrapper[4823]: I0121 18:31:57.012765 4823 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8js6h" event={"ID":"09cb8ad2-1a7a-4af9-a114-625fafc4cb92","Type":"ContainerStarted","Data":"b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f"} Jan 21 18:31:57 crc kubenswrapper[4823]: I0121 18:31:57.037147 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8js6h" podStartSLOduration=2.633962534 podStartE2EDuration="4.037122229s" podCreationTimestamp="2026-01-21 18:31:53 +0000 UTC" firstStartedPulling="2026-01-21 18:31:54.992377233 +0000 UTC m=+4515.918508093" lastFinishedPulling="2026-01-21 18:31:56.395536928 +0000 UTC m=+4517.321667788" observedRunningTime="2026-01-21 18:31:57.029310276 +0000 UTC m=+4517.955441146" watchObservedRunningTime="2026-01-21 18:31:57.037122229 +0000 UTC m=+4517.963253089" Jan 21 18:32:03 crc kubenswrapper[4823]: I0121 18:32:03.949134 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:32:03 crc kubenswrapper[4823]: I0121 18:32:03.949723 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:32:04 crc kubenswrapper[4823]: I0121 18:32:04.006207 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:32:04 crc kubenswrapper[4823]: I0121 18:32:04.119837 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:32:04 crc kubenswrapper[4823]: I0121 18:32:04.249745 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8js6h"] Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.088839 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8js6h" podUID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerName="registry-server" containerID="cri-o://b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f" gracePeriod=2 Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.770870 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.847721 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-catalog-content\") pod \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.847896 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2zkq\" (UniqueName: \"kubernetes.io/projected/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-kube-api-access-d2zkq\") pod \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.848083 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-utilities\") pod \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\" (UID: \"09cb8ad2-1a7a-4af9-a114-625fafc4cb92\") " Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.849569 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-utilities" (OuterVolumeSpecName: "utilities") pod "09cb8ad2-1a7a-4af9-a114-625fafc4cb92" (UID: "09cb8ad2-1a7a-4af9-a114-625fafc4cb92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.855238 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-kube-api-access-d2zkq" (OuterVolumeSpecName: "kube-api-access-d2zkq") pod "09cb8ad2-1a7a-4af9-a114-625fafc4cb92" (UID: "09cb8ad2-1a7a-4af9-a114-625fafc4cb92"). InnerVolumeSpecName "kube-api-access-d2zkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.870441 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09cb8ad2-1a7a-4af9-a114-625fafc4cb92" (UID: "09cb8ad2-1a7a-4af9-a114-625fafc4cb92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.950009 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.950254 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:32:06 crc kubenswrapper[4823]: I0121 18:32:06.950268 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2zkq\" (UniqueName: \"kubernetes.io/projected/09cb8ad2-1a7a-4af9-a114-625fafc4cb92-kube-api-access-d2zkq\") on node \"crc\" DevicePath \"\"" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.102719 4823 generic.go:334] "Generic (PLEG): container finished" podID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerID="b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f" exitCode=0 Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.102777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8js6h" event={"ID":"09cb8ad2-1a7a-4af9-a114-625fafc4cb92","Type":"ContainerDied","Data":"b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f"} Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.102811 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8js6h" event={"ID":"09cb8ad2-1a7a-4af9-a114-625fafc4cb92","Type":"ContainerDied","Data":"21858dbccde84bc2f3b3543cecbc2d0bc4618b23dbc20ef87743a911821783e4"} Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.102836 4823 scope.go:117] "RemoveContainer" containerID="b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.102880 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8js6h" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.141149 4823 scope.go:117] "RemoveContainer" containerID="b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.150898 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8js6h"] Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.160552 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8js6h"] Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.164557 4823 scope.go:117] "RemoveContainer" containerID="b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.209174 4823 scope.go:117] "RemoveContainer" containerID="b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f" Jan 21 18:32:07 crc kubenswrapper[4823]: E0121 18:32:07.210561 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f\": container with ID starting with b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f not found: ID does not exist" containerID="b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.210630 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f"} err="failed to get container status \"b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f\": rpc error: code = NotFound desc = could not find container \"b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f\": container with ID starting with b35ee593e39a303466ca2bc97253aa216a3bd6921713e7fd3a764d805876696f not found: ID does not exist" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.210660 4823 scope.go:117] "RemoveContainer" containerID="b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003" Jan 21 18:32:07 crc kubenswrapper[4823]: E0121 18:32:07.211131 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003\": container with ID starting with b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003 not found: ID does not exist" containerID="b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.211202 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003"} err="failed to get container status \"b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003\": rpc error: code = NotFound desc = could not find container \"b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003\": container with ID starting with b0b665cdb163942892b2b50d8f96d580784031897fb0832e94f76de964006003 not found: ID does not exist" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.211233 4823 scope.go:117] "RemoveContainer" containerID="b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8" Jan 21 18:32:07 crc kubenswrapper[4823]: E0121 18:32:07.211639 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8\": container with ID starting with b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8 not found: ID does not exist" containerID="b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.211680 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8"} err="failed to get container status \"b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8\": rpc error: code = NotFound desc = could not find container \"b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8\": container with ID starting with b36b7ee8975fe1bbb53c5df5adb5bad6536402d5a43df3493189fc7c114045e8 not found: ID does not exist" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.343568 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:32:07 crc kubenswrapper[4823]: E0121 18:32:07.343937 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:32:07 crc kubenswrapper[4823]: I0121 18:32:07.355446 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" path="/var/lib/kubelet/pods/09cb8ad2-1a7a-4af9-a114-625fafc4cb92/volumes" Jan 21 18:32:18 crc kubenswrapper[4823]: I0121 18:32:18.344080 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:32:18 crc kubenswrapper[4823]: E0121 18:32:18.344767 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:32:30 crc kubenswrapper[4823]: I0121 18:32:30.343591 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:32:30 crc kubenswrapper[4823]: E0121 18:32:30.344397 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:32:45 crc kubenswrapper[4823]: I0121 18:32:45.344362 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:32:45 crc kubenswrapper[4823]: E0121 18:32:45.345157 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:32:56 crc kubenswrapper[4823]: I0121 18:32:56.343610 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:32:56 crc kubenswrapper[4823]: E0121 18:32:56.344333 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:33:07 crc kubenswrapper[4823]: I0121 18:33:07.343435 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:33:07 crc kubenswrapper[4823]: E0121 18:33:07.345790 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.442555 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6fd7j"] Jan 21 18:33:16 crc kubenswrapper[4823]: E0121 18:33:16.443691 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerName="extract-utilities" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.443714 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerName="extract-utilities" Jan 21 18:33:16 crc kubenswrapper[4823]: E0121 18:33:16.443749 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerName="extract-content" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.443759 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerName="extract-content" Jan 21 18:33:16 crc kubenswrapper[4823]: E0121 18:33:16.443789 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerName="registry-server" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.443798 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerName="registry-server" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.448096 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="09cb8ad2-1a7a-4af9-a114-625fafc4cb92" containerName="registry-server" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.453523 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.478228 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fd7j"] Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.540105 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6gtj\" (UniqueName: \"kubernetes.io/projected/69feff9f-a200-4c79-b680-a11e20c15fd4-kube-api-access-f6gtj\") pod \"redhat-operators-6fd7j\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.540220 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-catalog-content\") pod \"redhat-operators-6fd7j\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.540579 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-utilities\") pod \"redhat-operators-6fd7j\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.642410 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-utilities\") pod \"redhat-operators-6fd7j\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.642535 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6gtj\" (UniqueName: \"kubernetes.io/projected/69feff9f-a200-4c79-b680-a11e20c15fd4-kube-api-access-f6gtj\") pod \"redhat-operators-6fd7j\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.642577 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-catalog-content\") pod \"redhat-operators-6fd7j\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.643104 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-catalog-content\") pod \"redhat-operators-6fd7j\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.643103 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-utilities\") pod \"redhat-operators-6fd7j\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.662222 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f6gtj\" (UniqueName: \"kubernetes.io/projected/69feff9f-a200-4c79-b680-a11e20c15fd4-kube-api-access-f6gtj\") pod \"redhat-operators-6fd7j\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:16 crc kubenswrapper[4823]: I0121 18:33:16.806734 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:17 crc kubenswrapper[4823]: I0121 18:33:17.289066 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fd7j"] Jan 21 18:33:17 crc kubenswrapper[4823]: I0121 18:33:17.761570 4823 generic.go:334] "Generic (PLEG): container finished" podID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerID="a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d" exitCode=0 Jan 21 18:33:17 crc kubenswrapper[4823]: I0121 18:33:17.761902 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fd7j" event={"ID":"69feff9f-a200-4c79-b680-a11e20c15fd4","Type":"ContainerDied","Data":"a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d"} Jan 21 18:33:17 crc kubenswrapper[4823]: I0121 18:33:17.761937 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fd7j" event={"ID":"69feff9f-a200-4c79-b680-a11e20c15fd4","Type":"ContainerStarted","Data":"9f5b2a676540b9e9bb4b8fbddadc380c4824e8ba3e80bd7f38de607c15c21918"} Jan 21 18:33:17 crc kubenswrapper[4823]: I0121 18:33:17.763871 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:33:19 crc kubenswrapper[4823]: I0121 18:33:19.779601 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fd7j" event={"ID":"69feff9f-a200-4c79-b680-a11e20c15fd4","Type":"ContainerStarted","Data":"b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5"} Jan 21 18:33:21 crc kubenswrapper[4823]: I0121 18:33:21.344474 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:33:22 crc kubenswrapper[4823]: I0121 18:33:22.804913 4823 generic.go:334] "Generic (PLEG): container finished" podID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerID="b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5" exitCode=0 Jan 21 18:33:22 crc kubenswrapper[4823]: I0121 18:33:22.805045 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fd7j" event={"ID":"69feff9f-a200-4c79-b680-a11e20c15fd4","Type":"ContainerDied","Data":"b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5"} Jan 21 18:33:23 crc kubenswrapper[4823]: I0121 18:33:23.819311 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"d13a8cbcb305191164a784438e1527052af3040a99d21b2f0776b156c04d53b4"} Jan 21 18:33:24 crc kubenswrapper[4823]: I0121 18:33:24.831908 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fd7j" event={"ID":"69feff9f-a200-4c79-b680-a11e20c15fd4","Type":"ContainerStarted","Data":"ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276"} Jan 21 18:33:24 crc kubenswrapper[4823]: I0121 18:33:24.855149 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-6fd7j" podStartSLOduration=2.454645839 podStartE2EDuration="8.855128412s" podCreationTimestamp="2026-01-21 18:33:16 +0000 UTC" firstStartedPulling="2026-01-21 18:33:17.763621727 +0000 UTC m=+4598.689752587" lastFinishedPulling="2026-01-21 18:33:24.1641043 +0000 UTC m=+4605.090235160" observedRunningTime="2026-01-21 18:33:24.849982394 +0000 UTC m=+4605.776113274" watchObservedRunningTime="2026-01-21 18:33:24.855128412 +0000 UTC m=+4605.781259272" Jan 21 18:33:26 crc kubenswrapper[4823]: I0121 18:33:26.808747 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:26 crc kubenswrapper[4823]: I0121 18:33:26.809436 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:27 crc kubenswrapper[4823]: I0121 18:33:27.855170 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6fd7j" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerName="registry-server" probeResult="failure" output=< Jan 21 18:33:27 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 21 18:33:27 crc kubenswrapper[4823]: > Jan 21 18:33:36 crc kubenswrapper[4823]: I0121 18:33:36.858850 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:36 crc kubenswrapper[4823]: I0121 18:33:36.908770 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:37 crc kubenswrapper[4823]: I0121 18:33:37.096245 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fd7j"] Jan 21 18:33:37 crc kubenswrapper[4823]: I0121 18:33:37.969801 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6fd7j" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerName="registry-server" containerID="cri-o://ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276" gracePeriod=2 Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.492248 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.610316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6gtj\" (UniqueName: \"kubernetes.io/projected/69feff9f-a200-4c79-b680-a11e20c15fd4-kube-api-access-f6gtj\") pod \"69feff9f-a200-4c79-b680-a11e20c15fd4\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.610936 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-utilities\") pod \"69feff9f-a200-4c79-b680-a11e20c15fd4\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.611141 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-catalog-content\") pod \"69feff9f-a200-4c79-b680-a11e20c15fd4\" (UID: \"69feff9f-a200-4c79-b680-a11e20c15fd4\") " Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.612094 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-utilities" (OuterVolumeSpecName: "utilities") pod "69feff9f-a200-4c79-b680-a11e20c15fd4" (UID: "69feff9f-a200-4c79-b680-a11e20c15fd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.617144 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69feff9f-a200-4c79-b680-a11e20c15fd4-kube-api-access-f6gtj" (OuterVolumeSpecName: "kube-api-access-f6gtj") pod "69feff9f-a200-4c79-b680-a11e20c15fd4" (UID: "69feff9f-a200-4c79-b680-a11e20c15fd4"). InnerVolumeSpecName "kube-api-access-f6gtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.712305 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6gtj\" (UniqueName: \"kubernetes.io/projected/69feff9f-a200-4c79-b680-a11e20c15fd4-kube-api-access-f6gtj\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.712517 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.734766 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69feff9f-a200-4c79-b680-a11e20c15fd4" (UID: "69feff9f-a200-4c79-b680-a11e20c15fd4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.814185 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69feff9f-a200-4c79-b680-a11e20c15fd4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.981952 4823 generic.go:334] "Generic (PLEG): container finished" podID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerID="ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276" exitCode=0 Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.982001 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fd7j" event={"ID":"69feff9f-a200-4c79-b680-a11e20c15fd4","Type":"ContainerDied","Data":"ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276"} Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.982273 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fd7j" event={"ID":"69feff9f-a200-4c79-b680-a11e20c15fd4","Type":"ContainerDied","Data":"9f5b2a676540b9e9bb4b8fbddadc380c4824e8ba3e80bd7f38de607c15c21918"} Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.982302 4823 scope.go:117] "RemoveContainer" containerID="ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276" Jan 21 18:33:38 crc kubenswrapper[4823]: I0121 18:33:38.982074 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6fd7j" Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.005450 4823 scope.go:117] "RemoveContainer" containerID="b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5" Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.023430 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fd7j"] Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.031528 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6fd7j"] Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.042819 4823 scope.go:117] "RemoveContainer" containerID="a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d" Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.077664 4823 scope.go:117] "RemoveContainer" containerID="ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276" Jan 21 18:33:39 crc kubenswrapper[4823]: E0121 18:33:39.078158 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276\": container with ID starting with ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276 not found: ID does not exist" containerID="ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276" Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.078202 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276"} err="failed to get container status \"ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276\": rpc error: code = NotFound desc = could not find container \"ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276\": container with ID starting with ae5130da753aaa1a2acb78b9bfee5ba41f5d253cb8247fce34527fb5cb82b276 not found: ID does not exist" Jan 21 18:33:39 crc 
kubenswrapper[4823]: I0121 18:33:39.078231 4823 scope.go:117] "RemoveContainer" containerID="b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5" Jan 21 18:33:39 crc kubenswrapper[4823]: E0121 18:33:39.078741 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5\": container with ID starting with b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5 not found: ID does not exist" containerID="b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5" Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.078786 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5"} err="failed to get container status \"b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5\": rpc error: code = NotFound desc = could not find container \"b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5\": container with ID starting with b2836f547fc8a91bb5e48221e97a85b43c9245cea9bda862f4fe896da0baf3d5 not found: ID does not exist" Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.078817 4823 scope.go:117] "RemoveContainer" containerID="a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d" Jan 21 18:33:39 crc kubenswrapper[4823]: E0121 18:33:39.079400 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d\": container with ID starting with a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d not found: ID does not exist" containerID="a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d" Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.079432 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d"} err="failed to get container status \"a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d\": rpc error: code = NotFound desc = could not find container \"a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d\": container with ID starting with a2b65730e877ec90c036bb34d6a4f5bd848a04940faccbcc1084c919fa223d5d not found: ID does not exist" Jan 21 18:33:39 crc kubenswrapper[4823]: I0121 18:33:39.358352 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" path="/var/lib/kubelet/pods/69feff9f-a200-4c79-b680-a11e20c15fd4/volumes" Jan 21 18:35:45 crc kubenswrapper[4823]: I0121 18:35:45.070300 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:35:45 crc kubenswrapper[4823]: I0121 18:35:45.070739 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:36:03 crc kubenswrapper[4823]: I0121 18:36:03.893125 4823 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-8lvtb"] Jan 21 18:36:03 crc kubenswrapper[4823]: E0121 18:36:03.894736 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerName="extract-content" Jan 21 18:36:03 crc kubenswrapper[4823]: I0121 18:36:03.894814 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerName="extract-content" Jan 21 18:36:03 crc kubenswrapper[4823]: E0121 18:36:03.894900 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerName="extract-utilities" Jan 21 18:36:03 crc kubenswrapper[4823]: I0121 18:36:03.894966 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerName="extract-utilities" Jan 21 18:36:03 crc kubenswrapper[4823]: E0121 18:36:03.895024 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerName="registry-server" Jan 21 18:36:03 crc kubenswrapper[4823]: I0121 18:36:03.895074 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerName="registry-server" Jan 21 18:36:03 crc kubenswrapper[4823]: I0121 18:36:03.895304 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="69feff9f-a200-4c79-b680-a11e20c15fd4" containerName="registry-server" Jan 21 18:36:03 crc kubenswrapper[4823]: I0121 18:36:03.896690 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:03 crc kubenswrapper[4823]: I0121 18:36:03.909346 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lvtb"] Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.085182 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-utilities\") pod \"certified-operators-8lvtb\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.085408 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-catalog-content\") pod \"certified-operators-8lvtb\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.085473 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66d62\" (UniqueName: \"kubernetes.io/projected/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-kube-api-access-66d62\") pod \"certified-operators-8lvtb\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.186867 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-utilities\") pod \"certified-operators-8lvtb\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.187066 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-catalog-content\") pod \"certified-operators-8lvtb\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.187165 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66d62\" (UniqueName: \"kubernetes.io/projected/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-kube-api-access-66d62\") pod \"certified-operators-8lvtb\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.187508 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-utilities\") pod \"certified-operators-8lvtb\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.187556 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-catalog-content\") pod \"certified-operators-8lvtb\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.206622 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66d62\" (UniqueName: \"kubernetes.io/projected/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-kube-api-access-66d62\") pod \"certified-operators-8lvtb\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.217413 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:04 crc kubenswrapper[4823]: I0121 18:36:04.742833 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lvtb"] Jan 21 18:36:05 crc kubenswrapper[4823]: I0121 18:36:05.330741 4823 generic.go:334] "Generic (PLEG): container finished" podID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerID="8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd" exitCode=0 Jan 21 18:36:05 crc kubenswrapper[4823]: I0121 18:36:05.330824 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lvtb" event={"ID":"c7a7220f-f21d-43e6-a0ab-d12189b2d59d","Type":"ContainerDied","Data":"8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd"} Jan 21 18:36:05 crc kubenswrapper[4823]: I0121 18:36:05.331101 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lvtb" event={"ID":"c7a7220f-f21d-43e6-a0ab-d12189b2d59d","Type":"ContainerStarted","Data":"34ae9bc41e5212395cffcf8ebe9d619a32b07044887f07ba9369edc63d4d5453"} Jan 21 18:36:06 crc kubenswrapper[4823]: I0121 18:36:06.346765 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lvtb" event={"ID":"c7a7220f-f21d-43e6-a0ab-d12189b2d59d","Type":"ContainerStarted","Data":"de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2"} Jan 21 18:36:07 crc kubenswrapper[4823]: I0121 18:36:07.359074 4823 generic.go:334] "Generic (PLEG): container finished" podID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerID="de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2" exitCode=0 Jan 21 18:36:07 crc kubenswrapper[4823]: I0121 18:36:07.359124 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lvtb" event={"ID":"c7a7220f-f21d-43e6-a0ab-d12189b2d59d","Type":"ContainerDied","Data":"de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2"} Jan 21 18:36:08 crc kubenswrapper[4823]: I0121 18:36:08.370515 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lvtb" event={"ID":"c7a7220f-f21d-43e6-a0ab-d12189b2d59d","Type":"ContainerStarted","Data":"25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873"} Jan 21 18:36:08 crc kubenswrapper[4823]: I0121 18:36:08.393900 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8lvtb" podStartSLOduration=2.911188477 podStartE2EDuration="5.393881032s" podCreationTimestamp="2026-01-21 18:36:03 +0000 UTC" firstStartedPulling="2026-01-21 18:36:05.334076659 +0000 UTC m=+4766.260207529" lastFinishedPulling="2026-01-21 18:36:07.816769214 +0000 UTC m=+4768.742900084" observedRunningTime="2026-01-21 18:36:08.387396042 +0000 UTC m=+4769.313526902" watchObservedRunningTime="2026-01-21 18:36:08.393881032 +0000 UTC m=+4769.320011892" Jan 21 18:36:14 crc kubenswrapper[4823]: I0121 18:36:14.217681 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:14 crc kubenswrapper[4823]: I0121 18:36:14.218124 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:14 crc kubenswrapper[4823]: I0121 18:36:14.267690 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:14 crc kubenswrapper[4823]: I0121 18:36:14.484501 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:14 crc kubenswrapper[4823]: I0121 18:36:14.532599 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lvtb"] Jan 21 18:36:15 crc kubenswrapper[4823]: I0121 18:36:15.070360 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:36:15 crc kubenswrapper[4823]: I0121 18:36:15.070447 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:36:16 crc kubenswrapper[4823]: I0121 18:36:16.449153 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8lvtb" podUID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerName="registry-server" containerID="cri-o://25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873" gracePeriod=2 Jan 21 18:36:16 crc kubenswrapper[4823]: I0121 18:36:16.909132 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.041434 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66d62\" (UniqueName: \"kubernetes.io/projected/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-kube-api-access-66d62\") pod \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.041666 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-catalog-content\") pod \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.041902 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-utilities\") pod \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\" (UID: \"c7a7220f-f21d-43e6-a0ab-d12189b2d59d\") " Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.043075 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-utilities" (OuterVolumeSpecName: "utilities") pod "c7a7220f-f21d-43e6-a0ab-d12189b2d59d" (UID: "c7a7220f-f21d-43e6-a0ab-d12189b2d59d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.048514 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-kube-api-access-66d62" (OuterVolumeSpecName: "kube-api-access-66d62") pod "c7a7220f-f21d-43e6-a0ab-d12189b2d59d" (UID: "c7a7220f-f21d-43e6-a0ab-d12189b2d59d"). InnerVolumeSpecName "kube-api-access-66d62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.090630 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7a7220f-f21d-43e6-a0ab-d12189b2d59d" (UID: "c7a7220f-f21d-43e6-a0ab-d12189b2d59d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.143731 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.143778 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66d62\" (UniqueName: \"kubernetes.io/projected/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-kube-api-access-66d62\") on node \"crc\" DevicePath \"\"" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.143794 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7a7220f-f21d-43e6-a0ab-d12189b2d59d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.459821 4823 generic.go:334] "Generic (PLEG): container finished" podID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerID="25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873" exitCode=0 Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.459921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lvtb" event={"ID":"c7a7220f-f21d-43e6-a0ab-d12189b2d59d","Type":"ContainerDied","Data":"25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873"} Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.459952 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lvtb" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.459999 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lvtb" event={"ID":"c7a7220f-f21d-43e6-a0ab-d12189b2d59d","Type":"ContainerDied","Data":"34ae9bc41e5212395cffcf8ebe9d619a32b07044887f07ba9369edc63d4d5453"} Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.460023 4823 scope.go:117] "RemoveContainer" containerID="25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.487623 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lvtb"] Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.491551 4823 scope.go:117] "RemoveContainer" containerID="de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.497497 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8lvtb"] Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.855186 4823 scope.go:117] "RemoveContainer" containerID="8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.916934 4823 scope.go:117] "RemoveContainer" containerID="25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873" Jan 21 18:36:17 crc kubenswrapper[4823]: E0121 18:36:17.917549 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873\": container with ID starting with 25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873 not found: ID does not exist" containerID="25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.917594 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873"} err="failed to get container status \"25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873\": rpc error: code = NotFound desc = could not find container \"25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873\": container with ID starting with 25dccf6e374156fbcf399b08b0c49cd96ddcc761c9fb52b4db3fe2fc3d970873 not found: ID does not exist" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.917622 4823 scope.go:117] "RemoveContainer" containerID="de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2" Jan 21 18:36:17 crc kubenswrapper[4823]: E0121 18:36:17.917948 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2\": container with ID starting with de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2 not found: ID does not exist" containerID="de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.917986 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2"} err="failed to get container status \"de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2\": rpc error: code = NotFound desc = could not find 
container \"de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2\": container with ID starting with de54b37e4151b057bfad6a5e38530c58d4aa2cfb57f1440f5380c3d69ec78ac2 not found: ID does not exist" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.918013 4823 scope.go:117] "RemoveContainer" containerID="8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd" Jan 21 18:36:17 crc kubenswrapper[4823]: E0121 18:36:17.918341 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd\": container with ID starting with 8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd not found: ID does not exist" containerID="8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd" Jan 21 18:36:17 crc kubenswrapper[4823]: I0121 18:36:17.918373 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd"} err="failed to get container status \"8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd\": rpc error: code = NotFound desc = could not find container \"8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd\": container with ID starting with 8f9a7b187cbd66419d2a40458164d826788d8ad6f73ec97d031b179148a934fd not found: ID does not exist" Jan 21 18:36:19 crc kubenswrapper[4823]: I0121 18:36:19.358712 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" path="/var/lib/kubelet/pods/c7a7220f-f21d-43e6-a0ab-d12189b2d59d/volumes" Jan 21 18:36:45 crc kubenswrapper[4823]: I0121 18:36:45.070351 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:36:45 crc kubenswrapper[4823]: I0121 18:36:45.071024 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:36:45 crc kubenswrapper[4823]: I0121 18:36:45.071103 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 18:36:45 crc kubenswrapper[4823]: I0121 18:36:45.074070 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d13a8cbcb305191164a784438e1527052af3040a99d21b2f0776b156c04d53b4"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:36:45 crc kubenswrapper[4823]: I0121 18:36:45.074221 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://d13a8cbcb305191164a784438e1527052af3040a99d21b2f0776b156c04d53b4" gracePeriod=600 Jan 21 18:36:45 crc kubenswrapper[4823]: I0121 18:36:45.756585 4823 
generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="d13a8cbcb305191164a784438e1527052af3040a99d21b2f0776b156c04d53b4" exitCode=0 Jan 21 18:36:45 crc kubenswrapper[4823]: I0121 18:36:45.756663 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"d13a8cbcb305191164a784438e1527052af3040a99d21b2f0776b156c04d53b4"} Jan 21 18:36:45 crc kubenswrapper[4823]: I0121 18:36:45.757197 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64"} Jan 21 18:36:45 crc kubenswrapper[4823]: I0121 18:36:45.757222 4823 scope.go:117] "RemoveContainer" containerID="108b748218b02fdeeb4a1c250591bc91c9df344bd951be90825c52f2c3ef23ed" Jan 21 18:38:45 crc kubenswrapper[4823]: I0121 18:38:45.070638 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:38:45 crc kubenswrapper[4823]: I0121 18:38:45.071347 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:39:15 crc kubenswrapper[4823]: I0121 18:39:15.071089 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:39:15 crc kubenswrapper[4823]: I0121 18:39:15.071671 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:39:45 crc kubenswrapper[4823]: I0121 18:39:45.070833 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:39:45 crc kubenswrapper[4823]: I0121 18:39:45.071411 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:39:45 crc kubenswrapper[4823]: I0121 18:39:45.071461 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 18:39:45 crc kubenswrapper[4823]: I0121 18:39:45.072278 
4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:39:45 crc kubenswrapper[4823]: I0121 18:39:45.072329 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" gracePeriod=600 Jan 21 18:39:45 crc kubenswrapper[4823]: E0121 18:39:45.209155 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:39:45 crc kubenswrapper[4823]: I0121 18:39:45.551179 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" exitCode=0 Jan 21 18:39:45 crc kubenswrapper[4823]: I0121 18:39:45.551325 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64"} Jan 21 18:39:45 crc kubenswrapper[4823]: I0121 18:39:45.551631 4823 scope.go:117] "RemoveContainer" containerID="d13a8cbcb305191164a784438e1527052af3040a99d21b2f0776b156c04d53b4" Jan 21 18:39:45 crc kubenswrapper[4823]: I0121 18:39:45.552273 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:39:45 crc kubenswrapper[4823]: E0121 18:39:45.552568 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:39:56 crc kubenswrapper[4823]: I0121 18:39:56.344905 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:39:56 crc kubenswrapper[4823]: E0121 18:39:56.345871 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:40:11 crc kubenswrapper[4823]: I0121 18:40:11.344011 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 
21 18:40:11 crc kubenswrapper[4823]: E0121 18:40:11.346080 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.343449 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:40:24 crc kubenswrapper[4823]: E0121 18:40:24.344202 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.890550 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fd57j"] Jan 21 18:40:24 crc kubenswrapper[4823]: E0121 18:40:24.891029 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerName="extract-content" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.891048 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerName="extract-content" Jan 21 18:40:24 crc kubenswrapper[4823]: E0121 18:40:24.891061 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerName="registry-server" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.891068 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerName="registry-server" Jan 21 18:40:24 crc kubenswrapper[4823]: E0121 18:40:24.891092 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerName="extract-utilities" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.891099 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerName="extract-utilities" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.891296 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a7220f-f21d-43e6-a0ab-d12189b2d59d" containerName="registry-server" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.892671 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.906310 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fd57j"] Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.996620 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-catalog-content\") pod \"community-operators-fd57j\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.996787 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-utilities\") pod \"community-operators-fd57j\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:24 crc kubenswrapper[4823]: I0121 18:40:24.996814 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jj77\" (UniqueName: \"kubernetes.io/projected/cc075f21-5271-4dd2-981c-c5cddaaf9107-kube-api-access-7jj77\") pod \"community-operators-fd57j\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:25 crc kubenswrapper[4823]: I0121 18:40:25.098663 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-utilities\") pod \"community-operators-fd57j\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:25 crc kubenswrapper[4823]: I0121 18:40:25.098709 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jj77\" (UniqueName: \"kubernetes.io/projected/cc075f21-5271-4dd2-981c-c5cddaaf9107-kube-api-access-7jj77\") pod \"community-operators-fd57j\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:25 crc kubenswrapper[4823]: I0121 18:40:25.098763 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-catalog-content\") pod \"community-operators-fd57j\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:25 crc kubenswrapper[4823]: I0121 18:40:25.099215 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-utilities\") pod \"community-operators-fd57j\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:25 crc kubenswrapper[4823]: I0121 18:40:25.099247 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-catalog-content\") pod \"community-operators-fd57j\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:25 crc kubenswrapper[4823]: I0121 18:40:25.118400 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7jj77\" (UniqueName: \"kubernetes.io/projected/cc075f21-5271-4dd2-981c-c5cddaaf9107-kube-api-access-7jj77\") pod \"community-operators-fd57j\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:25 crc kubenswrapper[4823]: I0121 18:40:25.265064 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:25 crc kubenswrapper[4823]: I0121 18:40:25.769807 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fd57j"] Jan 21 18:40:25 crc kubenswrapper[4823]: I0121 18:40:25.931420 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd57j" event={"ID":"cc075f21-5271-4dd2-981c-c5cddaaf9107","Type":"ContainerStarted","Data":"b1fec145ef9bb35d4af1744c8b36c92c789d7cf675b8ea4a3bec3dd2689931f0"} Jan 21 18:40:26 crc kubenswrapper[4823]: I0121 18:40:26.942220 4823 generic.go:334] "Generic (PLEG): container finished" podID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerID="4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6" exitCode=0 Jan 21 18:40:26 crc kubenswrapper[4823]: I0121 18:40:26.942495 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd57j" event={"ID":"cc075f21-5271-4dd2-981c-c5cddaaf9107","Type":"ContainerDied","Data":"4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6"} Jan 21 18:40:26 crc kubenswrapper[4823]: I0121 18:40:26.944725 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:40:27 crc kubenswrapper[4823]: I0121 18:40:27.953705 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd57j" event={"ID":"cc075f21-5271-4dd2-981c-c5cddaaf9107","Type":"ContainerStarted","Data":"7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016"} Jan 21 18:40:28 crc kubenswrapper[4823]: I0121 18:40:28.963915 4823 generic.go:334] "Generic (PLEG): container finished" podID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerID="7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016" exitCode=0 Jan 21 18:40:28 crc kubenswrapper[4823]: I0121 18:40:28.964075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd57j" event={"ID":"cc075f21-5271-4dd2-981c-c5cddaaf9107","Type":"ContainerDied","Data":"7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016"} Jan 21 18:40:29 crc kubenswrapper[4823]: I0121 18:40:29.974785 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd57j" event={"ID":"cc075f21-5271-4dd2-981c-c5cddaaf9107","Type":"ContainerStarted","Data":"6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536"} Jan 21 18:40:29 crc kubenswrapper[4823]: I0121 18:40:29.998439 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fd57j" podStartSLOduration=3.598944999 podStartE2EDuration="5.99841729s" podCreationTimestamp="2026-01-21 18:40:24 +0000 UTC" firstStartedPulling="2026-01-21 18:40:26.944491819 +0000 UTC m=+5027.870622669" lastFinishedPulling="2026-01-21 18:40:29.34396408 +0000 UTC m=+5030.270094960" observedRunningTime="2026-01-21 18:40:29.995073487 +0000 UTC m=+5030.921204367" watchObservedRunningTime="2026-01-21 
18:40:29.99841729 +0000 UTC m=+5030.924548150" Jan 21 18:40:35 crc kubenswrapper[4823]: I0121 18:40:35.265934 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:35 crc kubenswrapper[4823]: I0121 18:40:35.266232 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:35 crc kubenswrapper[4823]: I0121 18:40:35.314151 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:35 crc kubenswrapper[4823]: I0121 18:40:35.343633 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:40:35 crc kubenswrapper[4823]: E0121 18:40:35.343973 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:40:36 crc kubenswrapper[4823]: I0121 18:40:36.100278 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:38 crc kubenswrapper[4823]: I0121 18:40:38.880878 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fd57j"] Jan 21 18:40:38 crc kubenswrapper[4823]: I0121 18:40:38.883309 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fd57j" podUID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerName="registry-server" containerID="cri-o://6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536" gracePeriod=2 Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.061944 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.067493 4823 generic.go:334] "Generic (PLEG): container finished" podID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerID="6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536" exitCode=0 Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.067550 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd57j" event={"ID":"cc075f21-5271-4dd2-981c-c5cddaaf9107","Type":"ContainerDied","Data":"6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536"} Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.067583 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd57j" event={"ID":"cc075f21-5271-4dd2-981c-c5cddaaf9107","Type":"ContainerDied","Data":"b1fec145ef9bb35d4af1744c8b36c92c789d7cf675b8ea4a3bec3dd2689931f0"} Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.067605 4823 scope.go:117] "RemoveContainer" containerID="6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.116216 4823 scope.go:117] "RemoveContainer" containerID="7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.174554 4823 scope.go:117] "RemoveContainer" containerID="4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.207703 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-utilities\") pod \"cc075f21-5271-4dd2-981c-c5cddaaf9107\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.207789 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jj77\" (UniqueName: \"kubernetes.io/projected/cc075f21-5271-4dd2-981c-c5cddaaf9107-kube-api-access-7jj77\") pod \"cc075f21-5271-4dd2-981c-c5cddaaf9107\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.207890 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-catalog-content\") pod \"cc075f21-5271-4dd2-981c-c5cddaaf9107\" (UID: \"cc075f21-5271-4dd2-981c-c5cddaaf9107\") " Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.209733 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-utilities" (OuterVolumeSpecName: "utilities") pod "cc075f21-5271-4dd2-981c-c5cddaaf9107" (UID: "cc075f21-5271-4dd2-981c-c5cddaaf9107"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.211694 4823 scope.go:117] "RemoveContainer" containerID="6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536" Jan 21 18:40:40 crc kubenswrapper[4823]: E0121 18:40:40.216624 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536\": container with ID starting with 6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536 not found: ID does not exist" containerID="6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.216679 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536"} err="failed to get container status \"6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536\": rpc error: code = NotFound desc = could not find container \"6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536\": container with ID starting with 6a77b3c51ab854e277098740091bf397da68c0b73d50717444a9f3323cffd536 not found: ID does not exist" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.216707 4823 scope.go:117] "RemoveContainer" containerID="7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016" Jan 21 18:40:40 crc kubenswrapper[4823]: E0121 18:40:40.217064 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016\": container with ID starting with 7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016 not found: ID does not exist" containerID="7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.217098 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016"} err="failed to get container status \"7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016\": rpc error: code = NotFound desc = could not find container \"7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016\": container with ID starting with 7a9e93e0fc8d32924356d487412ac8785c0fa6b2fbd4eb03b998023e513fa016 not found: ID does not exist" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.217115 4823 scope.go:117] "RemoveContainer" containerID="4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6" Jan 21 18:40:40 crc kubenswrapper[4823]: E0121 18:40:40.217515 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6\": container with ID starting with 4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6 not found: ID does not exist" containerID="4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.217543 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6"} err="failed to get container status \"4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6\": rpc error: code = NotFound desc = could not 
find container \"4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6\": container with ID starting with 4227cfaac66d5c4000d070b69ab60d79b79daac747c5ec0666338eead3f99ff6 not found: ID does not exist" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.220124 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc075f21-5271-4dd2-981c-c5cddaaf9107-kube-api-access-7jj77" (OuterVolumeSpecName: "kube-api-access-7jj77") pod "cc075f21-5271-4dd2-981c-c5cddaaf9107" (UID: "cc075f21-5271-4dd2-981c-c5cddaaf9107"). InnerVolumeSpecName "kube-api-access-7jj77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.265599 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc075f21-5271-4dd2-981c-c5cddaaf9107" (UID: "cc075f21-5271-4dd2-981c-c5cddaaf9107"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.310256 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.310299 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jj77\" (UniqueName: \"kubernetes.io/projected/cc075f21-5271-4dd2-981c-c5cddaaf9107-kube-api-access-7jj77\") on node \"crc\" DevicePath \"\"" Jan 21 18:40:40 crc kubenswrapper[4823]: I0121 18:40:40.310311 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075f21-5271-4dd2-981c-c5cddaaf9107-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:40:41 crc kubenswrapper[4823]: I0121 18:40:41.076543 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fd57j" Jan 21 18:40:41 crc kubenswrapper[4823]: I0121 18:40:41.110512 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fd57j"] Jan 21 18:40:41 crc kubenswrapper[4823]: I0121 18:40:41.119828 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fd57j"] Jan 21 18:40:41 crc kubenswrapper[4823]: I0121 18:40:41.353933 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc075f21-5271-4dd2-981c-c5cddaaf9107" path="/var/lib/kubelet/pods/cc075f21-5271-4dd2-981c-c5cddaaf9107/volumes" Jan 21 18:40:47 crc kubenswrapper[4823]: I0121 18:40:47.344148 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:40:47 crc kubenswrapper[4823]: E0121 18:40:47.344954 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:40:59 crc kubenswrapper[4823]: I0121 18:40:59.353344 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:40:59 crc kubenswrapper[4823]: E0121 18:40:59.354255 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:41:14 crc kubenswrapper[4823]: I0121 18:41:14.343765 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:41:14 crc kubenswrapper[4823]: E0121 18:41:14.344641 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:41:29 crc kubenswrapper[4823]: I0121 18:41:29.354112 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:41:29 crc kubenswrapper[4823]: E0121 18:41:29.355793 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:41:41 crc kubenswrapper[4823]: I0121 18:41:41.343979 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 
18:41:41 crc kubenswrapper[4823]: E0121 18:41:41.344643 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:41:52 crc kubenswrapper[4823]: I0121 18:41:52.345220 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:41:52 crc kubenswrapper[4823]: E0121 18:41:52.346145 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.963416 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lfr6n"] Jan 21 18:41:59 crc kubenswrapper[4823]: E0121 18:41:59.964692 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerName="registry-server" Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.964710 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerName="registry-server" Jan 21 18:41:59 crc kubenswrapper[4823]: E0121 18:41:59.964761 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerName="extract-utilities" Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.964768 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerName="extract-utilities" Jan 21 18:41:59 crc kubenswrapper[4823]: E0121 18:41:59.964780 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerName="extract-content" Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.964786 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerName="extract-content" Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.965085 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc075f21-5271-4dd2-981c-c5cddaaf9107" containerName="registry-server" Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.967103 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.975023 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfr6n"] Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.977005 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-utilities\") pod \"redhat-marketplace-lfr6n\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.977089 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65gjv\" (UniqueName: \"kubernetes.io/projected/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-kube-api-access-65gjv\") pod \"redhat-marketplace-lfr6n\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:41:59 crc kubenswrapper[4823]: I0121 18:41:59.977222 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-catalog-content\") pod \"redhat-marketplace-lfr6n\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:00 crc kubenswrapper[4823]: I0121 18:42:00.079293 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-utilities\") pod \"redhat-marketplace-lfr6n\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:00 crc kubenswrapper[4823]: I0121 18:42:00.079649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65gjv\" (UniqueName: \"kubernetes.io/projected/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-kube-api-access-65gjv\") pod \"redhat-marketplace-lfr6n\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:00 crc kubenswrapper[4823]: I0121 18:42:00.079725 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-catalog-content\") pod \"redhat-marketplace-lfr6n\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:00 crc kubenswrapper[4823]: I0121 18:42:00.080413 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-utilities\") pod \"redhat-marketplace-lfr6n\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:00 crc kubenswrapper[4823]: I0121 18:42:00.080476 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-catalog-content\") pod \"redhat-marketplace-lfr6n\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:00 crc kubenswrapper[4823]: I0121 18:42:00.101024 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-65gjv\" (UniqueName: \"kubernetes.io/projected/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-kube-api-access-65gjv\") pod \"redhat-marketplace-lfr6n\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:00 crc kubenswrapper[4823]: I0121 18:42:00.305225 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:00 crc kubenswrapper[4823]: I0121 18:42:00.802176 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfr6n"] Jan 21 18:42:00 crc kubenswrapper[4823]: I0121 18:42:00.831208 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfr6n" event={"ID":"07819c2a-3464-4927-ae5b-f5c6cd64ddcf","Type":"ContainerStarted","Data":"5bd705450db77ea0dc6db7bb6c921bc1253661457d5d6a072d6a2808684af5be"} Jan 21 18:42:01 crc kubenswrapper[4823]: I0121 18:42:01.841505 4823 generic.go:334] "Generic (PLEG): container finished" podID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerID="5a1fb77badec8e12bb0b99319826cbba975d4b37b2f808c17356094bf3bc4360" exitCode=0 Jan 21 18:42:01 crc kubenswrapper[4823]: I0121 18:42:01.841616 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfr6n" event={"ID":"07819c2a-3464-4927-ae5b-f5c6cd64ddcf","Type":"ContainerDied","Data":"5a1fb77badec8e12bb0b99319826cbba975d4b37b2f808c17356094bf3bc4360"} Jan 21 18:42:02 crc kubenswrapper[4823]: I0121 18:42:02.852396 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfr6n" event={"ID":"07819c2a-3464-4927-ae5b-f5c6cd64ddcf","Type":"ContainerStarted","Data":"869f29042d3e577b50544edcbf88e6a3c03ae4078cc63833d42d0ce2ec289939"} Jan 21 18:42:03 crc kubenswrapper[4823]: I0121 18:42:03.864357 4823 generic.go:334] "Generic (PLEG): container finished" podID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerID="869f29042d3e577b50544edcbf88e6a3c03ae4078cc63833d42d0ce2ec289939" exitCode=0 Jan 21 18:42:03 crc kubenswrapper[4823]: I0121 18:42:03.864401 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfr6n" event={"ID":"07819c2a-3464-4927-ae5b-f5c6cd64ddcf","Type":"ContainerDied","Data":"869f29042d3e577b50544edcbf88e6a3c03ae4078cc63833d42d0ce2ec289939"} Jan 21 18:42:04 crc kubenswrapper[4823]: I0121 18:42:04.344278 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:42:04 crc kubenswrapper[4823]: E0121 18:42:04.344612 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:42:05 crc kubenswrapper[4823]: I0121 18:42:05.886539 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfr6n" event={"ID":"07819c2a-3464-4927-ae5b-f5c6cd64ddcf","Type":"ContainerStarted","Data":"d0ad0fef5e7283814b2682505787c1a18a6f7132ae9b9c875984c8a35707552e"} Jan 21 18:42:05 crc kubenswrapper[4823]: I0121 18:42:05.916312 4823 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-lfr6n" podStartSLOduration=3.976360503 podStartE2EDuration="6.916285734s" podCreationTimestamp="2026-01-21 18:41:59 +0000 UTC" firstStartedPulling="2026-01-21 18:42:01.843703056 +0000 UTC m=+5122.769833926" lastFinishedPulling="2026-01-21 18:42:04.783628287 +0000 UTC m=+5125.709759157" observedRunningTime="2026-01-21 18:42:05.913543967 +0000 UTC m=+5126.839674847" watchObservedRunningTime="2026-01-21 18:42:05.916285734 +0000 UTC m=+5126.842416584" Jan 21 18:42:10 crc kubenswrapper[4823]: I0121 18:42:10.305505 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:10 crc kubenswrapper[4823]: I0121 18:42:10.306069 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:10 crc kubenswrapper[4823]: I0121 18:42:10.359684 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:11 crc kubenswrapper[4823]: I0121 18:42:11.003919 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:11 crc kubenswrapper[4823]: I0121 18:42:11.063789 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfr6n"] Jan 21 18:42:12 crc kubenswrapper[4823]: I0121 18:42:12.953735 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lfr6n" podUID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerName="registry-server" containerID="cri-o://d0ad0fef5e7283814b2682505787c1a18a6f7132ae9b9c875984c8a35707552e" gracePeriod=2 Jan 21 18:42:13 crc kubenswrapper[4823]: I0121 18:42:13.966130 4823 generic.go:334] "Generic (PLEG): container finished" podID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerID="d0ad0fef5e7283814b2682505787c1a18a6f7132ae9b9c875984c8a35707552e" exitCode=0 Jan 21 18:42:13 crc kubenswrapper[4823]: I0121 18:42:13.966208 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfr6n" event={"ID":"07819c2a-3464-4927-ae5b-f5c6cd64ddcf","Type":"ContainerDied","Data":"d0ad0fef5e7283814b2682505787c1a18a6f7132ae9b9c875984c8a35707552e"} Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.593167 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.686454 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-catalog-content\") pod \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.686559 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-utilities\") pod \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.686640 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65gjv\" (UniqueName: \"kubernetes.io/projected/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-kube-api-access-65gjv\") pod \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\" (UID: \"07819c2a-3464-4927-ae5b-f5c6cd64ddcf\") " Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.688896 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-utilities" (OuterVolumeSpecName: "utilities") pod "07819c2a-3464-4927-ae5b-f5c6cd64ddcf" (UID: "07819c2a-3464-4927-ae5b-f5c6cd64ddcf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.693078 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-kube-api-access-65gjv" (OuterVolumeSpecName: "kube-api-access-65gjv") pod "07819c2a-3464-4927-ae5b-f5c6cd64ddcf" (UID: "07819c2a-3464-4927-ae5b-f5c6cd64ddcf"). InnerVolumeSpecName "kube-api-access-65gjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.728672 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07819c2a-3464-4927-ae5b-f5c6cd64ddcf" (UID: "07819c2a-3464-4927-ae5b-f5c6cd64ddcf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.789169 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65gjv\" (UniqueName: \"kubernetes.io/projected/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-kube-api-access-65gjv\") on node \"crc\" DevicePath \"\"" Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.789224 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.789238 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07819c2a-3464-4927-ae5b-f5c6cd64ddcf-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.978345 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfr6n" event={"ID":"07819c2a-3464-4927-ae5b-f5c6cd64ddcf","Type":"ContainerDied","Data":"5bd705450db77ea0dc6db7bb6c921bc1253661457d5d6a072d6a2808684af5be"} Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.978397 4823 scope.go:117] "RemoveContainer" containerID="d0ad0fef5e7283814b2682505787c1a18a6f7132ae9b9c875984c8a35707552e" Jan 21 18:42:14 crc kubenswrapper[4823]: I0121 18:42:14.978434 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfr6n" Jan 21 18:42:15 crc kubenswrapper[4823]: I0121 18:42:15.016916 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfr6n"] Jan 21 18:42:15 crc kubenswrapper[4823]: I0121 18:42:15.027130 4823 scope.go:117] "RemoveContainer" containerID="869f29042d3e577b50544edcbf88e6a3c03ae4078cc63833d42d0ce2ec289939" Jan 21 18:42:15 crc kubenswrapper[4823]: I0121 18:42:15.027562 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfr6n"] Jan 21 18:42:15 crc kubenswrapper[4823]: I0121 18:42:15.056537 4823 scope.go:117] "RemoveContainer" containerID="5a1fb77badec8e12bb0b99319826cbba975d4b37b2f808c17356094bf3bc4360" Jan 21 18:42:15 crc kubenswrapper[4823]: I0121 18:42:15.355992 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" path="/var/lib/kubelet/pods/07819c2a-3464-4927-ae5b-f5c6cd64ddcf/volumes" Jan 21 18:42:18 crc kubenswrapper[4823]: I0121 18:42:18.344093 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:42:18 crc kubenswrapper[4823]: E0121 18:42:18.344959 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:42:33 crc kubenswrapper[4823]: I0121 18:42:33.344931 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:42:33 crc kubenswrapper[4823]: E0121 18:42:33.345945 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:42:45 crc kubenswrapper[4823]: I0121 18:42:45.344319 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:42:45 crc kubenswrapper[4823]: E0121 18:42:45.345203 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:42:59 crc kubenswrapper[4823]: I0121 18:42:59.353025 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:42:59 crc kubenswrapper[4823]: E0121 18:42:59.353723 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:43:14 crc kubenswrapper[4823]: I0121 18:43:14.344064 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:43:14 crc kubenswrapper[4823]: E0121 18:43:14.344781 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:43:28 crc kubenswrapper[4823]: I0121 18:43:28.343985 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:43:28 crc kubenswrapper[4823]: E0121 18:43:28.344688 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:43:43 crc kubenswrapper[4823]: I0121 18:43:43.344518 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:43:43 crc kubenswrapper[4823]: E0121 18:43:43.345470 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:43:52 crc kubenswrapper[4823]: I0121 18:43:52.939771 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8xl2"] Jan 21 18:43:52 crc kubenswrapper[4823]: E0121 18:43:52.940790 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerName="registry-server" Jan 21 18:43:52 crc kubenswrapper[4823]: I0121 18:43:52.940808 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerName="registry-server" Jan 21 18:43:52 crc kubenswrapper[4823]: E0121 18:43:52.940844 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerName="extract-utilities" Jan 21 18:43:52 crc kubenswrapper[4823]: I0121 18:43:52.940871 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerName="extract-utilities" Jan 21 18:43:52 crc kubenswrapper[4823]: E0121 18:43:52.940896 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerName="extract-content" Jan 21 18:43:52 crc kubenswrapper[4823]: I0121 18:43:52.940904 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerName="extract-content" Jan 21 18:43:52 crc kubenswrapper[4823]: I0121 18:43:52.941141 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="07819c2a-3464-4927-ae5b-f5c6cd64ddcf" containerName="registry-server" Jan 21 18:43:52 crc kubenswrapper[4823]: I0121 18:43:52.943009 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:52 crc kubenswrapper[4823]: I0121 18:43:52.948886 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8xl2"] Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.054059 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-utilities\") pod \"redhat-operators-z8xl2\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.054146 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-catalog-content\") pod \"redhat-operators-z8xl2\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.054261 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzzr\" (UniqueName: \"kubernetes.io/projected/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-kube-api-access-wjzzr\") pod \"redhat-operators-z8xl2\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.156516 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-catalog-content\") pod \"redhat-operators-z8xl2\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.156614 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzzr\" (UniqueName: \"kubernetes.io/projected/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-kube-api-access-wjzzr\") pod \"redhat-operators-z8xl2\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.156725 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-utilities\") pod \"redhat-operators-z8xl2\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.157188 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-catalog-content\") pod \"redhat-operators-z8xl2\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.157247 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-utilities\") pod \"redhat-operators-z8xl2\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.198400 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wjzzr\" (UniqueName: \"kubernetes.io/projected/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-kube-api-access-wjzzr\") pod \"redhat-operators-z8xl2\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.264621 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.762431 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8xl2"] Jan 21 18:43:53 crc kubenswrapper[4823]: I0121 18:43:53.843821 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8xl2" event={"ID":"d5b610ee-02b8-4399-8f05-fbe3f35f07ec","Type":"ContainerStarted","Data":"17534170b6f8dd8abab130d60d601f2e64b6d1f2ca58dda794d5acf9d737de4b"} Jan 21 18:43:54 crc kubenswrapper[4823]: E0121 18:43:54.216982 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5b610ee_02b8_4399_8f05_fbe3f35f07ec.slice/crio-conmon-b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384.scope\": RecentStats: unable to find data in memory cache]" Jan 21 18:43:54 crc kubenswrapper[4823]: I0121 18:43:54.859739 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerID="b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384" exitCode=0 Jan 21 18:43:54 crc kubenswrapper[4823]: I0121 18:43:54.859808 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8xl2" event={"ID":"d5b610ee-02b8-4399-8f05-fbe3f35f07ec","Type":"ContainerDied","Data":"b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384"} Jan 21 18:43:56 crc kubenswrapper[4823]: I0121 18:43:56.344024 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:43:56 crc kubenswrapper[4823]: E0121 18:43:56.344574 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:43:56 crc kubenswrapper[4823]: I0121 18:43:56.880053 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8xl2" event={"ID":"d5b610ee-02b8-4399-8f05-fbe3f35f07ec","Type":"ContainerStarted","Data":"0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597"} Jan 21 18:43:58 crc kubenswrapper[4823]: I0121 18:43:58.907669 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerID="0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597" exitCode=0 Jan 21 18:43:58 crc kubenswrapper[4823]: I0121 18:43:58.907872 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8xl2" event={"ID":"d5b610ee-02b8-4399-8f05-fbe3f35f07ec","Type":"ContainerDied","Data":"0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597"} Jan 21 18:43:59 crc kubenswrapper[4823]: I0121 
18:43:59.920546 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8xl2" event={"ID":"d5b610ee-02b8-4399-8f05-fbe3f35f07ec","Type":"ContainerStarted","Data":"5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d"} Jan 21 18:43:59 crc kubenswrapper[4823]: I0121 18:43:59.946005 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8xl2" podStartSLOduration=3.498602884 podStartE2EDuration="7.94598592s" podCreationTimestamp="2026-01-21 18:43:52 +0000 UTC" firstStartedPulling="2026-01-21 18:43:54.862528066 +0000 UTC m=+5235.788658946" lastFinishedPulling="2026-01-21 18:43:59.309911122 +0000 UTC m=+5240.236041982" observedRunningTime="2026-01-21 18:43:59.941218372 +0000 UTC m=+5240.867349252" watchObservedRunningTime="2026-01-21 18:43:59.94598592 +0000 UTC m=+5240.872116780" Jan 21 18:44:03 crc kubenswrapper[4823]: I0121 18:44:03.265823 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:44:03 crc kubenswrapper[4823]: I0121 18:44:03.266362 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:44:04 crc kubenswrapper[4823]: I0121 18:44:04.795973 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z8xl2" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerName="registry-server" probeResult="failure" output=< Jan 21 18:44:04 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 21 18:44:04 crc kubenswrapper[4823]: > Jan 21 18:44:09 crc kubenswrapper[4823]: I0121 18:44:09.351785 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:44:09 crc kubenswrapper[4823]: E0121 18:44:09.352603 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:44:13 crc kubenswrapper[4823]: I0121 18:44:13.320170 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:44:13 crc kubenswrapper[4823]: I0121 18:44:13.366628 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:44:13 crc kubenswrapper[4823]: I0121 18:44:13.566190 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8xl2"] Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.158123 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z8xl2" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerName="registry-server" containerID="cri-o://5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d" gracePeriod=2 Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.652968 4823 util.go:48] "No ready sandbox for pod can be found. 
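The startup probe output above has the message format of the grpc_health_probe tool: the registry-server container serves the standard gRPC health service on port 50051, and the probe fails while the freshly extracted catalog is still loading (it turns "started" at 18:44:13 below). A minimal Go equivalent of the check, assuming a plaintext endpoint on localhost:50051 and the probe's 1s timeout:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    // Dial the registry-server's gRPC endpoint and ask the standard health
    // service whether it is SERVING, giving up after 1s like the probe.
    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()

    	conn, err := grpc.DialContext(ctx, "localhost:50051",
    		grpc.WithTransportCredentials(insecure.NewCredentials()),
    		grpc.WithBlock()) // fail fast if the server is not accepting connections yet
    	if err != nil {
    		fmt.Println(`timeout: failed to connect service ":50051" within 1s`)
    		return
    	}
    	defer conn.Close()

    	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
    	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
    		fmt.Println("probe failed:", err, resp.GetStatus())
    		return
    	}
    	fmt.Println("probe ok: SERVING")
    }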
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.720440 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjzzr\" (UniqueName: \"kubernetes.io/projected/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-kube-api-access-wjzzr\") pod \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.720665 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-utilities\") pod \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.721661 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-utilities" (OuterVolumeSpecName: "utilities") pod "d5b610ee-02b8-4399-8f05-fbe3f35f07ec" (UID: "d5b610ee-02b8-4399-8f05-fbe3f35f07ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.721907 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-catalog-content\") pod \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\" (UID: \"d5b610ee-02b8-4399-8f05-fbe3f35f07ec\") " Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.723087 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.728836 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-kube-api-access-wjzzr" (OuterVolumeSpecName: "kube-api-access-wjzzr") pod "d5b610ee-02b8-4399-8f05-fbe3f35f07ec" (UID: "d5b610ee-02b8-4399-8f05-fbe3f35f07ec"). InnerVolumeSpecName "kube-api-access-wjzzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.824894 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjzzr\" (UniqueName: \"kubernetes.io/projected/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-kube-api-access-wjzzr\") on node \"crc\" DevicePath \"\"" Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.841209 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5b610ee-02b8-4399-8f05-fbe3f35f07ec" (UID: "d5b610ee-02b8-4399-8f05-fbe3f35f07ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:44:15 crc kubenswrapper[4823]: I0121 18:44:15.926426 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b610ee-02b8-4399-8f05-fbe3f35f07ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.174588 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerID="5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d" exitCode=0 Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.174671 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8xl2" event={"ID":"d5b610ee-02b8-4399-8f05-fbe3f35f07ec","Type":"ContainerDied","Data":"5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d"} Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.174991 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8xl2" event={"ID":"d5b610ee-02b8-4399-8f05-fbe3f35f07ec","Type":"ContainerDied","Data":"17534170b6f8dd8abab130d60d601f2e64b6d1f2ca58dda794d5acf9d737de4b"} Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.175019 4823 scope.go:117] "RemoveContainer" containerID="5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d" Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.174772 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8xl2" Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.213147 4823 scope.go:117] "RemoveContainer" containerID="0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597" Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.216234 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8xl2"] Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.226694 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z8xl2"] Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.461553 4823 scope.go:117] "RemoveContainer" containerID="b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384" Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.508362 4823 scope.go:117] "RemoveContainer" containerID="5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d" Jan 21 18:44:16 crc kubenswrapper[4823]: E0121 18:44:16.508775 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d\": container with ID starting with 5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d not found: ID does not exist" containerID="5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d" Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.508844 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d"} err="failed to get container status \"5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d\": rpc error: code = NotFound desc = could not find container \"5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d\": container with ID starting with 5b86dc8d41b6c9331ad5675e290066d8c4cdb12d9fc99422c27d5d9d2985398d not found: ID does not exist" Jan 21 18:44:16 crc 
kubenswrapper[4823]: I0121 18:44:16.508929 4823 scope.go:117] "RemoveContainer" containerID="0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597" Jan 21 18:44:16 crc kubenswrapper[4823]: E0121 18:44:16.509196 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597\": container with ID starting with 0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597 not found: ID does not exist" containerID="0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597" Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.509235 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597"} err="failed to get container status \"0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597\": rpc error: code = NotFound desc = could not find container \"0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597\": container with ID starting with 0057bf6e0f918f4e16df76a923be22be16abda77e2000721380e47bab6495597 not found: ID does not exist" Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.509275 4823 scope.go:117] "RemoveContainer" containerID="b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384" Jan 21 18:44:16 crc kubenswrapper[4823]: E0121 18:44:16.509591 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384\": container with ID starting with b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384 not found: ID does not exist" containerID="b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384" Jan 21 18:44:16 crc kubenswrapper[4823]: I0121 18:44:16.509621 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384"} err="failed to get container status \"b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384\": rpc error: code = NotFound desc = could not find container \"b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384\": container with ID starting with b95dc60c0eb1bbdb843ab14c875fbc63a3bbe6fc72c42e13804bafbe26d34384 not found: ID does not exist" Jan 21 18:44:17 crc kubenswrapper[4823]: I0121 18:44:17.355814 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" path="/var/lib/kubelet/pods/d5b610ee-02b8-4399-8f05-fbe3f35f07ec/volumes" Jan 21 18:44:20 crc kubenswrapper[4823]: I0121 18:44:20.344513 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:44:20 crc kubenswrapper[4823]: E0121 18:44:20.345368 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:44:31 crc kubenswrapper[4823]: I0121 18:44:31.344565 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" 
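The NotFound responses above are the benign tail of pod cleanup: the containers were already removed along with the sandbox, so when the retried RemoveContainer calls ask the runtime about them again, CRI-O answers with gRPC NotFound and the kubelet logs it and moves on. The net effect is an idempotent delete; a sketch of that pattern (the helper name is illustrative, not kubelet's):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIgnoringNotFound treats a gRPC NotFound from the runtime as
    // success: the container is already gone, so there is nothing to undo.
    func removeIgnoringNotFound(remove func(id string) error, id string) error {
    	err := remove(id)
    	if status.Code(err) == codes.NotFound {
    		return nil // already gone: the delete is idempotent
    	}
    	return err // nil on success, real errors propagate
    }

    func main() {
    	gone := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id+": ID does not exist")
    	}
    	fmt.Println(removeIgnoringNotFound(gone, "5b86dc8d41b6")) // <nil>: treated as success
    }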
Jan 21 18:44:31 crc kubenswrapper[4823]: E0121 18:44:31.345438 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e"
Jan 21 18:44:42 crc kubenswrapper[4823]: I0121 18:44:42.345591 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64"
Jan 21 18:44:42 crc kubenswrapper[4823]: E0121 18:44:42.347127 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e"
Jan 21 18:44:56 crc kubenswrapper[4823]: I0121 18:44:56.343665 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64"
Jan 21 18:44:57 crc kubenswrapper[4823]: I0121 18:44:57.563185 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"22e7ab45ab5c64a08bf3189ccc361f989aa527602f5fac36ce02fd9f02fa92e3"}
Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.153192 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m"]
Jan 21 18:45:00 crc kubenswrapper[4823]: E0121 18:45:00.154174 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerName="extract-content"
Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.154187 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerName="extract-content"
Jan 21 18:45:00 crc kubenswrapper[4823]: E0121 18:45:00.154220 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerName="registry-server"
Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.154227 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerName="registry-server"
Jan 21 18:45:00 crc kubenswrapper[4823]: E0121 18:45:00.154247 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerName="extract-utilities"
Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.154253 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerName="extract-utilities"
Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.154441 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b610ee-02b8-4399-8f05-fbe3f35f07ec" containerName="registry-server"
Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.155315 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m"
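The numeric suffix in collect-profiles-29483685 is not random: the CronJob controller names each Job after its scheduled time expressed in minutes since the Unix epoch. Decoding the suffix recovers the 18:45:00 slot that fired here:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Decode the Job-name suffix from "collect-profiles-29483685":
    // minutes since the Unix epoch -> wall-clock scheduled time.
    func main() {
    	const suffix = 29483685 // minutes since epoch, taken from the job name
    	t := time.Unix(suffix*60, 0).UTC()
    	fmt.Println(t) // 2026-01-21 18:45:00 +0000 UTC, matching the SyncLoop ADD above
    }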
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.158684 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.158771 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.175945 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m"] Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.269341 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l246r\" (UniqueName: \"kubernetes.io/projected/16315dec-367f-49f1-ae9c-7459bc8dbf8a-kube-api-access-l246r\") pod \"collect-profiles-29483685-hvh8m\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.269444 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16315dec-367f-49f1-ae9c-7459bc8dbf8a-config-volume\") pod \"collect-profiles-29483685-hvh8m\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.269826 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16315dec-367f-49f1-ae9c-7459bc8dbf8a-secret-volume\") pod \"collect-profiles-29483685-hvh8m\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.371295 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l246r\" (UniqueName: \"kubernetes.io/projected/16315dec-367f-49f1-ae9c-7459bc8dbf8a-kube-api-access-l246r\") pod \"collect-profiles-29483685-hvh8m\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.371489 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16315dec-367f-49f1-ae9c-7459bc8dbf8a-config-volume\") pod \"collect-profiles-29483685-hvh8m\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.371601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16315dec-367f-49f1-ae9c-7459bc8dbf8a-secret-volume\") pod \"collect-profiles-29483685-hvh8m\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.372846 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16315dec-367f-49f1-ae9c-7459bc8dbf8a-config-volume\") pod 
\"collect-profiles-29483685-hvh8m\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.387628 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16315dec-367f-49f1-ae9c-7459bc8dbf8a-secret-volume\") pod \"collect-profiles-29483685-hvh8m\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.388349 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l246r\" (UniqueName: \"kubernetes.io/projected/16315dec-367f-49f1-ae9c-7459bc8dbf8a-kube-api-access-l246r\") pod \"collect-profiles-29483685-hvh8m\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.482624 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:00 crc kubenswrapper[4823]: I0121 18:45:00.977431 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m"] Jan 21 18:45:00 crc kubenswrapper[4823]: W0121 18:45:00.983132 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16315dec_367f_49f1_ae9c_7459bc8dbf8a.slice/crio-e5a2d5a1be5a519a2fd5ae6623b5bccf614e66b0e09720c61c590eb637457c7d WatchSource:0}: Error finding container e5a2d5a1be5a519a2fd5ae6623b5bccf614e66b0e09720c61c590eb637457c7d: Status 404 returned error can't find the container with id e5a2d5a1be5a519a2fd5ae6623b5bccf614e66b0e09720c61c590eb637457c7d Jan 21 18:45:01 crc kubenswrapper[4823]: I0121 18:45:01.615317 4823 generic.go:334] "Generic (PLEG): container finished" podID="16315dec-367f-49f1-ae9c-7459bc8dbf8a" containerID="b35e994cc1550da01a52bef96c79eae75f89c8f2833743c75bec4fa98ac3ab66" exitCode=0 Jan 21 18:45:01 crc kubenswrapper[4823]: I0121 18:45:01.615388 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" event={"ID":"16315dec-367f-49f1-ae9c-7459bc8dbf8a","Type":"ContainerDied","Data":"b35e994cc1550da01a52bef96c79eae75f89c8f2833743c75bec4fa98ac3ab66"} Jan 21 18:45:01 crc kubenswrapper[4823]: I0121 18:45:01.616507 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" event={"ID":"16315dec-367f-49f1-ae9c-7459bc8dbf8a","Type":"ContainerStarted","Data":"e5a2d5a1be5a519a2fd5ae6623b5bccf614e66b0e09720c61c590eb637457c7d"} Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.016917 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.128221 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16315dec-367f-49f1-ae9c-7459bc8dbf8a-secret-volume\") pod \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.128448 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16315dec-367f-49f1-ae9c-7459bc8dbf8a-config-volume\") pod \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.128507 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l246r\" (UniqueName: \"kubernetes.io/projected/16315dec-367f-49f1-ae9c-7459bc8dbf8a-kube-api-access-l246r\") pod \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\" (UID: \"16315dec-367f-49f1-ae9c-7459bc8dbf8a\") " Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.130038 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16315dec-367f-49f1-ae9c-7459bc8dbf8a-config-volume" (OuterVolumeSpecName: "config-volume") pod "16315dec-367f-49f1-ae9c-7459bc8dbf8a" (UID: "16315dec-367f-49f1-ae9c-7459bc8dbf8a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.135411 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16315dec-367f-49f1-ae9c-7459bc8dbf8a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "16315dec-367f-49f1-ae9c-7459bc8dbf8a" (UID: "16315dec-367f-49f1-ae9c-7459bc8dbf8a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.137296 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16315dec-367f-49f1-ae9c-7459bc8dbf8a-kube-api-access-l246r" (OuterVolumeSpecName: "kube-api-access-l246r") pod "16315dec-367f-49f1-ae9c-7459bc8dbf8a" (UID: "16315dec-367f-49f1-ae9c-7459bc8dbf8a"). InnerVolumeSpecName "kube-api-access-l246r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.230493 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16315dec-367f-49f1-ae9c-7459bc8dbf8a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.230529 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l246r\" (UniqueName: \"kubernetes.io/projected/16315dec-367f-49f1-ae9c-7459bc8dbf8a-kube-api-access-l246r\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.230542 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16315dec-367f-49f1-ae9c-7459bc8dbf8a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.634554 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" event={"ID":"16315dec-367f-49f1-ae9c-7459bc8dbf8a","Type":"ContainerDied","Data":"e5a2d5a1be5a519a2fd5ae6623b5bccf614e66b0e09720c61c590eb637457c7d"} Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.634595 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5a2d5a1be5a519a2fd5ae6623b5bccf614e66b0e09720c61c590eb637457c7d" Jan 21 18:45:03 crc kubenswrapper[4823]: I0121 18:45:03.634596 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hvh8m" Jan 21 18:45:04 crc kubenswrapper[4823]: I0121 18:45:04.100501 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h"] Jan 21 18:45:04 crc kubenswrapper[4823]: I0121 18:45:04.109807 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483640-hr25h"] Jan 21 18:45:05 crc kubenswrapper[4823]: I0121 18:45:05.376304 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0208cb78-6a49-4a77-bba2-d269d5c015a0" path="/var/lib/kubelet/pods/0208cb78-6a49-4a77-bba2-d269d5c015a0/volumes" Jan 21 18:45:30 crc kubenswrapper[4823]: I0121 18:45:30.775815 4823 scope.go:117] "RemoveContainer" containerID="31c24add75e28df7d6839888f9b1297e8febc39ff905b636cc131523f1ec2e78" Jan 21 18:45:39 crc kubenswrapper[4823]: I0121 18:45:39.989336 4823 generic.go:334] "Generic (PLEG): container finished" podID="0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" containerID="9f2877648488784246bb2d2d079277649debcb3a3e9036f27e442284c15d7dbf" exitCode=1 Jan 21 18:45:39 crc kubenswrapper[4823]: I0121 18:45:39.989456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f","Type":"ContainerDied","Data":"9f2877648488784246bb2d2d079277649debcb3a3e9036f27e442284c15d7dbf"} Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.401175 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.491611 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-config-data\") pod \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.491663 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config-secret\") pod \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.491774 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.491905 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lp6v\" (UniqueName: \"kubernetes.io/projected/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-kube-api-access-6lp6v\") pod \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.491931 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config\") pod \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.491965 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-workdir\") pod \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.492062 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-temporary\") pod \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.492136 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ssh-key\") pod \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.492171 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ca-certs\") pod \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\" (UID: \"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f\") " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.493218 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" (UID: "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.494130 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-config-data" (OuterVolumeSpecName: "config-data") pod "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" (UID: "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.498044 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" (UID: "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.510557 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-kube-api-access-6lp6v" (OuterVolumeSpecName: "kube-api-access-6lp6v") pod "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" (UID: "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f"). InnerVolumeSpecName "kube-api-access-6lp6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.527285 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" (UID: "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.534984 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" (UID: "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.559519 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" (UID: "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.566084 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" (UID: "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.578919 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" (UID: "0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.594095 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lp6v\" (UniqueName: \"kubernetes.io/projected/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-kube-api-access-6lp6v\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.594136 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.594147 4823 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.594158 4823 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.594170 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.594179 4823 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.594186 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.594197 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.594223 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.617012 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 18:45:41 crc kubenswrapper[4823]: I0121 18:45:41.696763 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:42 crc kubenswrapper[4823]: I0121 18:45:42.011296 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f","Type":"ContainerDied","Data":"f92797d626577d38462c12e5884108f2d7ec787263a2e8087ca8574d4d780162"} Jan 21 18:45:42 crc kubenswrapper[4823]: I0121 18:45:42.011342 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f92797d626577d38462c12e5884108f2d7ec787263a2e8087ca8574d4d780162" Jan 21 18:45:42 crc kubenswrapper[4823]: I0121 18:45:42.011408 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.320432 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 18:45:48 crc kubenswrapper[4823]: E0121 18:45:48.321502 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" containerName="tempest-tests-tempest-tests-runner" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.321520 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" containerName="tempest-tests-tempest-tests-runner" Jan 21 18:45:48 crc kubenswrapper[4823]: E0121 18:45:48.321552 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16315dec-367f-49f1-ae9c-7459bc8dbf8a" containerName="collect-profiles" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.321560 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="16315dec-367f-49f1-ae9c-7459bc8dbf8a" containerName="collect-profiles" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.321790 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="16315dec-367f-49f1-ae9c-7459bc8dbf8a" containerName="collect-profiles" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.321828 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f" containerName="tempest-tests-tempest-tests-runner" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.322649 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.325257 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-922mx" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.339835 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.433214 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br7g2\" (UniqueName: \"kubernetes.io/projected/ec737905-ba30-4dbe-8d04-d5ee2c19d624-kube-api-access-br7g2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ec737905-ba30-4dbe-8d04-d5ee2c19d624\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.433530 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ec737905-ba30-4dbe-8d04-d5ee2c19d624\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.535541 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ec737905-ba30-4dbe-8d04-d5ee2c19d624\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.536000 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br7g2\" (UniqueName: \"kubernetes.io/projected/ec737905-ba30-4dbe-8d04-d5ee2c19d624-kube-api-access-br7g2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ec737905-ba30-4dbe-8d04-d5ee2c19d624\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.537370 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ec737905-ba30-4dbe-8d04-d5ee2c19d624\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.558503 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br7g2\" (UniqueName: \"kubernetes.io/projected/ec737905-ba30-4dbe-8d04-d5ee2c19d624-kube-api-access-br7g2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ec737905-ba30-4dbe-8d04-d5ee2c19d624\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 18:45:48 crc kubenswrapper[4823]: I0121 18:45:48.569096 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ec737905-ba30-4dbe-8d04-d5ee2c19d624\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 18:45:48 crc 
kubenswrapper[4823]: I0121 18:45:48.649671 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 18:45:49 crc kubenswrapper[4823]: I0121 18:45:49.126712 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 18:45:49 crc kubenswrapper[4823]: I0121 18:45:49.136143 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:45:50 crc kubenswrapper[4823]: I0121 18:45:50.081675 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ec737905-ba30-4dbe-8d04-d5ee2c19d624","Type":"ContainerStarted","Data":"8a12dd146b838a4a7104c393dba06698c02eb0a5fc4bbb58b5f9d984514ee455"} Jan 21 18:45:51 crc kubenswrapper[4823]: I0121 18:45:51.093934 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ec737905-ba30-4dbe-8d04-d5ee2c19d624","Type":"ContainerStarted","Data":"488023d31f7c3f18c3035edd362be5d146f9d7d89c4876cb67faa618ce4bf202"} Jan 21 18:45:51 crc kubenswrapper[4823]: I0121 18:45:51.114592 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.276900203 podStartE2EDuration="3.114573169s" podCreationTimestamp="2026-01-21 18:45:48 +0000 UTC" firstStartedPulling="2026-01-21 18:45:49.135962761 +0000 UTC m=+5350.062093621" lastFinishedPulling="2026-01-21 18:45:49.973635727 +0000 UTC m=+5350.899766587" observedRunningTime="2026-01-21 18:45:51.105473124 +0000 UTC m=+5352.031603984" watchObservedRunningTime="2026-01-21 18:45:51.114573169 +0000 UTC m=+5352.040704039" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.102715 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4nwc2"] Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.105814 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.126760 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4nwc2"] Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.228000 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-catalog-content\") pod \"certified-operators-4nwc2\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.228067 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf2gn\" (UniqueName: \"kubernetes.io/projected/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-kube-api-access-wf2gn\") pod \"certified-operators-4nwc2\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.228133 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-utilities\") pod \"certified-operators-4nwc2\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.330440 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-catalog-content\") pod \"certified-operators-4nwc2\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.330518 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf2gn\" (UniqueName: \"kubernetes.io/projected/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-kube-api-access-wf2gn\") pod \"certified-operators-4nwc2\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.330585 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-utilities\") pod \"certified-operators-4nwc2\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.331360 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-utilities\") pod \"certified-operators-4nwc2\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.331633 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-catalog-content\") pod \"certified-operators-4nwc2\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.356435 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wf2gn\" (UniqueName: \"kubernetes.io/projected/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-kube-api-access-wf2gn\") pod \"certified-operators-4nwc2\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.475795 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:13 crc kubenswrapper[4823]: I0121 18:46:13.979438 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4nwc2"] Jan 21 18:46:14 crc kubenswrapper[4823]: I0121 18:46:14.319675 4823 generic.go:334] "Generic (PLEG): container finished" podID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerID="b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636" exitCode=0 Jan 21 18:46:14 crc kubenswrapper[4823]: I0121 18:46:14.319938 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nwc2" event={"ID":"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386","Type":"ContainerDied","Data":"b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636"} Jan 21 18:46:14 crc kubenswrapper[4823]: I0121 18:46:14.320069 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nwc2" event={"ID":"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386","Type":"ContainerStarted","Data":"b0c46e2bfd0a7bd47a76bcb6bba0e6b123946989de2fccfca543512bb46d5bec"} Jan 21 18:46:15 crc kubenswrapper[4823]: I0121 18:46:15.331207 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nwc2" event={"ID":"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386","Type":"ContainerStarted","Data":"7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed"} Jan 21 18:46:16 crc kubenswrapper[4823]: I0121 18:46:16.342520 4823 generic.go:334] "Generic (PLEG): container finished" podID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerID="7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed" exitCode=0 Jan 21 18:46:16 crc kubenswrapper[4823]: I0121 18:46:16.342630 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nwc2" event={"ID":"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386","Type":"ContainerDied","Data":"7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed"} Jan 21 18:46:17 crc kubenswrapper[4823]: I0121 18:46:17.357130 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nwc2" event={"ID":"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386","Type":"ContainerStarted","Data":"fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8"} Jan 21 18:46:17 crc kubenswrapper[4823]: I0121 18:46:17.375845 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4nwc2" podStartSLOduration=1.965982428 podStartE2EDuration="4.375821854s" podCreationTimestamp="2026-01-21 18:46:13 +0000 UTC" firstStartedPulling="2026-01-21 18:46:14.322711844 +0000 UTC m=+5375.248842704" lastFinishedPulling="2026-01-21 18:46:16.73255127 +0000 UTC m=+5377.658682130" observedRunningTime="2026-01-21 18:46:17.371694863 +0000 UTC m=+5378.297825733" watchObservedRunningTime="2026-01-21 18:46:17.375821854 +0000 UTC m=+5378.301952714" Jan 21 18:46:23 crc kubenswrapper[4823]: I0121 18:46:23.476948 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:23 crc kubenswrapper[4823]: I0121 18:46:23.477482 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:23 crc kubenswrapper[4823]: I0121 18:46:23.527206 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:24 crc kubenswrapper[4823]: I0121 18:46:24.483629 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:24 crc kubenswrapper[4823]: I0121 18:46:24.536622 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4nwc2"] Jan 21 18:46:26 crc kubenswrapper[4823]: I0121 18:46:26.454090 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4nwc2" podUID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerName="registry-server" containerID="cri-o://fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8" gracePeriod=2 Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.425866 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.466539 4823 generic.go:334] "Generic (PLEG): container finished" podID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerID="fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8" exitCode=0 Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.466594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nwc2" event={"ID":"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386","Type":"ContainerDied","Data":"fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8"} Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.466611 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4nwc2" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.466634 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nwc2" event={"ID":"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386","Type":"ContainerDied","Data":"b0c46e2bfd0a7bd47a76bcb6bba0e6b123946989de2fccfca543512bb46d5bec"} Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.466658 4823 scope.go:117] "RemoveContainer" containerID="fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.491011 4823 scope.go:117] "RemoveContainer" containerID="7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.511544 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-catalog-content\") pod \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.513339 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf2gn\" (UniqueName: \"kubernetes.io/projected/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-kube-api-access-wf2gn\") pod \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.513398 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-utilities\") pod \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\" (UID: \"69e1c00a-0c7f-4f87-a3f9-6aa7c3975386\") " Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.514136 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-utilities" (OuterVolumeSpecName: "utilities") pod "69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" (UID: "69e1c00a-0c7f-4f87-a3f9-6aa7c3975386"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.515618 4823 scope.go:117] "RemoveContainer" containerID="b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.520805 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-kube-api-access-wf2gn" (OuterVolumeSpecName: "kube-api-access-wf2gn") pod "69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" (UID: "69e1c00a-0c7f-4f87-a3f9-6aa7c3975386"). InnerVolumeSpecName "kube-api-access-wf2gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.556700 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" (UID: "69e1c00a-0c7f-4f87-a3f9-6aa7c3975386"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.609324 4823 scope.go:117] "RemoveContainer" containerID="fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8" Jan 21 18:46:27 crc kubenswrapper[4823]: E0121 18:46:27.609944 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8\": container with ID starting with fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8 not found: ID does not exist" containerID="fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.609980 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8"} err="failed to get container status \"fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8\": rpc error: code = NotFound desc = could not find container \"fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8\": container with ID starting with fab4db8c372ff06fd35e4945b9a4e2e709d4ca45280765dc0abbe4a5122fc3d8 not found: ID does not exist" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.610022 4823 scope.go:117] "RemoveContainer" containerID="7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed" Jan 21 18:46:27 crc kubenswrapper[4823]: E0121 18:46:27.610283 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed\": container with ID starting with 7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed not found: ID does not exist" containerID="7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.610307 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed"} err="failed to get container status \"7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed\": rpc error: code = NotFound desc = could not find container \"7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed\": container with ID starting with 7f00067a1dc756be0f5280623567c572093bd70fcae14a8fcea2ae8c0aff16ed not found: ID does not exist" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.610320 4823 scope.go:117] "RemoveContainer" containerID="b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636" Jan 21 18:46:27 crc kubenswrapper[4823]: E0121 18:46:27.610536 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636\": container with ID starting with b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636 not found: ID does not exist" containerID="b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.610564 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636"} err="failed to get container status \"b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636\": rpc error: code = NotFound desc = could not 
find container \"b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636\": container with ID starting with b6b75fd04a52602a88eb1ac69f8134e2acc5d42b92d14bfe663b778debc2c636 not found: ID does not exist" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.616535 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf2gn\" (UniqueName: \"kubernetes.io/projected/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-kube-api-access-wf2gn\") on node \"crc\" DevicePath \"\"" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.616585 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.616597 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.803370 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4nwc2"] Jan 21 18:46:27 crc kubenswrapper[4823]: I0121 18:46:27.813209 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4nwc2"] Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.745050 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t9pf9/must-gather-snbwp"] Jan 21 18:46:28 crc kubenswrapper[4823]: E0121 18:46:28.746118 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerName="registry-server" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.746136 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerName="registry-server" Jan 21 18:46:28 crc kubenswrapper[4823]: E0121 18:46:28.746156 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerName="extract-content" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.746163 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerName="extract-content" Jan 21 18:46:28 crc kubenswrapper[4823]: E0121 18:46:28.746191 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerName="extract-utilities" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.746200 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerName="extract-utilities" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.746457 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" containerName="registry-server" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.748305 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.749921 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t9pf9"/"kube-root-ca.crt" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.750233 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t9pf9"/"default-dockercfg-2kbv9" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.752447 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t9pf9"/"openshift-service-ca.crt" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.757255 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t9pf9/must-gather-snbwp"] Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.841272 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhtt2\" (UniqueName: \"kubernetes.io/projected/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-kube-api-access-jhtt2\") pod \"must-gather-snbwp\" (UID: \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\") " pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.841411 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-must-gather-output\") pod \"must-gather-snbwp\" (UID: \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\") " pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.943597 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhtt2\" (UniqueName: \"kubernetes.io/projected/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-kube-api-access-jhtt2\") pod \"must-gather-snbwp\" (UID: \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\") " pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.943704 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-must-gather-output\") pod \"must-gather-snbwp\" (UID: \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\") " pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.944101 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-must-gather-output\") pod \"must-gather-snbwp\" (UID: \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\") " pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:46:28 crc kubenswrapper[4823]: I0121 18:46:28.969987 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhtt2\" (UniqueName: \"kubernetes.io/projected/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-kube-api-access-jhtt2\") pod \"must-gather-snbwp\" (UID: \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\") " pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:46:29 crc kubenswrapper[4823]: I0121 18:46:29.066245 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:46:29 crc kubenswrapper[4823]: I0121 18:46:29.355508 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e1c00a-0c7f-4f87-a3f9-6aa7c3975386" path="/var/lib/kubelet/pods/69e1c00a-0c7f-4f87-a3f9-6aa7c3975386/volumes" Jan 21 18:46:29 crc kubenswrapper[4823]: I0121 18:46:29.554320 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t9pf9/must-gather-snbwp"] Jan 21 18:46:29 crc kubenswrapper[4823]: W0121 18:46:29.562164 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode249ce50_4aec_4e92_8cca_0487b4cc1e5e.slice/crio-eeba2ad2186ccda65930ba3d7db75f1025b2d03a29ca2575c60fbff31f0d036b WatchSource:0}: Error finding container eeba2ad2186ccda65930ba3d7db75f1025b2d03a29ca2575c60fbff31f0d036b: Status 404 returned error can't find the container with id eeba2ad2186ccda65930ba3d7db75f1025b2d03a29ca2575c60fbff31f0d036b Jan 21 18:46:30 crc kubenswrapper[4823]: I0121 18:46:30.503993 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/must-gather-snbwp" event={"ID":"e249ce50-4aec-4e92-8cca-0487b4cc1e5e","Type":"ContainerStarted","Data":"eeba2ad2186ccda65930ba3d7db75f1025b2d03a29ca2575c60fbff31f0d036b"} Jan 21 18:46:34 crc kubenswrapper[4823]: I0121 18:46:34.543679 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/must-gather-snbwp" event={"ID":"e249ce50-4aec-4e92-8cca-0487b4cc1e5e","Type":"ContainerStarted","Data":"b7612281b8128624773550db4cd772ad26adf6a329e77c20d6ed064c03d135b4"} Jan 21 18:46:34 crc kubenswrapper[4823]: I0121 18:46:34.544240 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/must-gather-snbwp" event={"ID":"e249ce50-4aec-4e92-8cca-0487b4cc1e5e","Type":"ContainerStarted","Data":"ab1c619706a7746df8a83c415e6e93102b86ef090ddae5dac7da3fd22f72d57e"} Jan 21 18:46:34 crc kubenswrapper[4823]: I0121 18:46:34.564081 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t9pf9/must-gather-snbwp" podStartSLOduration=2.717941759 podStartE2EDuration="6.564054565s" podCreationTimestamp="2026-01-21 18:46:28 +0000 UTC" firstStartedPulling="2026-01-21 18:46:29.565730239 +0000 UTC m=+5390.491861099" lastFinishedPulling="2026-01-21 18:46:33.411843045 +0000 UTC m=+5394.337973905" observedRunningTime="2026-01-21 18:46:34.558255982 +0000 UTC m=+5395.484386922" watchObservedRunningTime="2026-01-21 18:46:34.564054565 +0000 UTC m=+5395.490185455" Jan 21 18:46:37 crc kubenswrapper[4823]: I0121 18:46:37.706207 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t9pf9/crc-debug-6cm4g"] Jan 21 18:46:37 crc kubenswrapper[4823]: I0121 18:46:37.708502 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:46:37 crc kubenswrapper[4823]: I0121 18:46:37.836445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26mgp\" (UniqueName: \"kubernetes.io/projected/436f6e4f-58fc-43ce-a976-ada2ceabf929-kube-api-access-26mgp\") pod \"crc-debug-6cm4g\" (UID: \"436f6e4f-58fc-43ce-a976-ada2ceabf929\") " pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:46:37 crc kubenswrapper[4823]: I0121 18:46:37.836523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/436f6e4f-58fc-43ce-a976-ada2ceabf929-host\") pod \"crc-debug-6cm4g\" (UID: \"436f6e4f-58fc-43ce-a976-ada2ceabf929\") " pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:46:37 crc kubenswrapper[4823]: I0121 18:46:37.939157 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26mgp\" (UniqueName: \"kubernetes.io/projected/436f6e4f-58fc-43ce-a976-ada2ceabf929-kube-api-access-26mgp\") pod \"crc-debug-6cm4g\" (UID: \"436f6e4f-58fc-43ce-a976-ada2ceabf929\") " pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:46:37 crc kubenswrapper[4823]: I0121 18:46:37.939968 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/436f6e4f-58fc-43ce-a976-ada2ceabf929-host\") pod \"crc-debug-6cm4g\" (UID: \"436f6e4f-58fc-43ce-a976-ada2ceabf929\") " pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:46:37 crc kubenswrapper[4823]: I0121 18:46:37.940111 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/436f6e4f-58fc-43ce-a976-ada2ceabf929-host\") pod \"crc-debug-6cm4g\" (UID: \"436f6e4f-58fc-43ce-a976-ada2ceabf929\") " pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:46:37 crc kubenswrapper[4823]: I0121 18:46:37.964283 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26mgp\" (UniqueName: \"kubernetes.io/projected/436f6e4f-58fc-43ce-a976-ada2ceabf929-kube-api-access-26mgp\") pod \"crc-debug-6cm4g\" (UID: \"436f6e4f-58fc-43ce-a976-ada2ceabf929\") " pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:46:38 crc kubenswrapper[4823]: I0121 18:46:38.029614 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:46:38 crc kubenswrapper[4823]: W0121 18:46:38.069279 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod436f6e4f_58fc_43ce_a976_ada2ceabf929.slice/crio-ea309816f62b8e51d3118284a6086318bb9d861899603e7352a3077c7a116879 WatchSource:0}: Error finding container ea309816f62b8e51d3118284a6086318bb9d861899603e7352a3077c7a116879: Status 404 returned error can't find the container with id ea309816f62b8e51d3118284a6086318bb9d861899603e7352a3077c7a116879 Jan 21 18:46:38 crc kubenswrapper[4823]: I0121 18:46:38.585947 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" event={"ID":"436f6e4f-58fc-43ce-a976-ada2ceabf929","Type":"ContainerStarted","Data":"ea309816f62b8e51d3118284a6086318bb9d861899603e7352a3077c7a116879"} Jan 21 18:46:49 crc kubenswrapper[4823]: I0121 18:46:49.710839 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" event={"ID":"436f6e4f-58fc-43ce-a976-ada2ceabf929","Type":"ContainerStarted","Data":"cfb0f586c4da4e724fdea0b33425b7ac9765aec60fd61f1cfe45560e5ced566e"} Jan 21 18:46:49 crc kubenswrapper[4823]: I0121 18:46:49.758705 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" podStartSLOduration=2.3364437000000002 podStartE2EDuration="12.758672969s" podCreationTimestamp="2026-01-21 18:46:37 +0000 UTC" firstStartedPulling="2026-01-21 18:46:38.071204936 +0000 UTC m=+5398.997335796" lastFinishedPulling="2026-01-21 18:46:48.493434195 +0000 UTC m=+5409.419565065" observedRunningTime="2026-01-21 18:46:49.757358546 +0000 UTC m=+5410.683489416" watchObservedRunningTime="2026-01-21 18:46:49.758672969 +0000 UTC m=+5410.684803829" Jan 21 18:47:15 crc kubenswrapper[4823]: I0121 18:47:15.071140 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:47:15 crc kubenswrapper[4823]: I0121 18:47:15.071547 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:47:41 crc kubenswrapper[4823]: I0121 18:47:41.213056 4823 generic.go:334] "Generic (PLEG): container finished" podID="436f6e4f-58fc-43ce-a976-ada2ceabf929" containerID="cfb0f586c4da4e724fdea0b33425b7ac9765aec60fd61f1cfe45560e5ced566e" exitCode=0 Jan 21 18:47:41 crc kubenswrapper[4823]: I0121 18:47:41.213139 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" event={"ID":"436f6e4f-58fc-43ce-a976-ada2ceabf929","Type":"ContainerDied","Data":"cfb0f586c4da4e724fdea0b33425b7ac9765aec60fd61f1cfe45560e5ced566e"} Jan 21 18:47:42 crc kubenswrapper[4823]: I0121 18:47:42.356160 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:47:42 crc kubenswrapper[4823]: I0121 18:47:42.392921 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t9pf9/crc-debug-6cm4g"] Jan 21 18:47:42 crc kubenswrapper[4823]: I0121 18:47:42.406296 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t9pf9/crc-debug-6cm4g"] Jan 21 18:47:42 crc kubenswrapper[4823]: I0121 18:47:42.547946 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/436f6e4f-58fc-43ce-a976-ada2ceabf929-host\") pod \"436f6e4f-58fc-43ce-a976-ada2ceabf929\" (UID: \"436f6e4f-58fc-43ce-a976-ada2ceabf929\") " Jan 21 18:47:42 crc kubenswrapper[4823]: I0121 18:47:42.548052 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26mgp\" (UniqueName: \"kubernetes.io/projected/436f6e4f-58fc-43ce-a976-ada2ceabf929-kube-api-access-26mgp\") pod \"436f6e4f-58fc-43ce-a976-ada2ceabf929\" (UID: \"436f6e4f-58fc-43ce-a976-ada2ceabf929\") " Jan 21 18:47:42 crc kubenswrapper[4823]: I0121 18:47:42.548051 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/436f6e4f-58fc-43ce-a976-ada2ceabf929-host" (OuterVolumeSpecName: "host") pod "436f6e4f-58fc-43ce-a976-ada2ceabf929" (UID: "436f6e4f-58fc-43ce-a976-ada2ceabf929"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 18:47:42 crc kubenswrapper[4823]: I0121 18:47:42.548616 4823 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/436f6e4f-58fc-43ce-a976-ada2ceabf929-host\") on node \"crc\" DevicePath \"\"" Jan 21 18:47:42 crc kubenswrapper[4823]: I0121 18:47:42.554039 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436f6e4f-58fc-43ce-a976-ada2ceabf929-kube-api-access-26mgp" (OuterVolumeSpecName: "kube-api-access-26mgp") pod "436f6e4f-58fc-43ce-a976-ada2ceabf929" (UID: "436f6e4f-58fc-43ce-a976-ada2ceabf929"). InnerVolumeSpecName "kube-api-access-26mgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:47:42 crc kubenswrapper[4823]: I0121 18:47:42.649691 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26mgp\" (UniqueName: \"kubernetes.io/projected/436f6e4f-58fc-43ce-a976-ada2ceabf929-kube-api-access-26mgp\") on node \"crc\" DevicePath \"\"" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.232432 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea309816f62b8e51d3118284a6086318bb9d861899603e7352a3077c7a116879" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.232644 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-6cm4g" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.355517 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436f6e4f-58fc-43ce-a976-ada2ceabf929" path="/var/lib/kubelet/pods/436f6e4f-58fc-43ce-a976-ada2ceabf929/volumes" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.574063 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t9pf9/crc-debug-fjhp2"] Jan 21 18:47:43 crc kubenswrapper[4823]: E0121 18:47:43.574476 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436f6e4f-58fc-43ce-a976-ada2ceabf929" containerName="container-00" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.574488 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="436f6e4f-58fc-43ce-a976-ada2ceabf929" containerName="container-00" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.574673 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="436f6e4f-58fc-43ce-a976-ada2ceabf929" containerName="container-00" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.579476 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.770018 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxcdc\" (UniqueName: \"kubernetes.io/projected/02042b80-a430-430f-8939-1c58989d63b0-kube-api-access-zxcdc\") pod \"crc-debug-fjhp2\" (UID: \"02042b80-a430-430f-8939-1c58989d63b0\") " pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.770343 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02042b80-a430-430f-8939-1c58989d63b0-host\") pod \"crc-debug-fjhp2\" (UID: \"02042b80-a430-430f-8939-1c58989d63b0\") " pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.872188 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcdc\" (UniqueName: \"kubernetes.io/projected/02042b80-a430-430f-8939-1c58989d63b0-kube-api-access-zxcdc\") pod \"crc-debug-fjhp2\" (UID: \"02042b80-a430-430f-8939-1c58989d63b0\") " pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.872372 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02042b80-a430-430f-8939-1c58989d63b0-host\") pod \"crc-debug-fjhp2\" (UID: \"02042b80-a430-430f-8939-1c58989d63b0\") " pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.872531 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02042b80-a430-430f-8939-1c58989d63b0-host\") pod \"crc-debug-fjhp2\" (UID: \"02042b80-a430-430f-8939-1c58989d63b0\") " pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.891985 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcdc\" (UniqueName: \"kubernetes.io/projected/02042b80-a430-430f-8939-1c58989d63b0-kube-api-access-zxcdc\") pod \"crc-debug-fjhp2\" (UID: \"02042b80-a430-430f-8939-1c58989d63b0\") " 
pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:43 crc kubenswrapper[4823]: I0121 18:47:43.895963 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:44 crc kubenswrapper[4823]: I0121 18:47:44.241966 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" event={"ID":"02042b80-a430-430f-8939-1c58989d63b0","Type":"ContainerStarted","Data":"0d0a46c58befdd25e6db33a66f1b548a247c4fded0c16cd9b1e3e4aa9212b42b"} Jan 21 18:47:44 crc kubenswrapper[4823]: I0121 18:47:44.242264 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" event={"ID":"02042b80-a430-430f-8939-1c58989d63b0","Type":"ContainerStarted","Data":"576b7cec28e7848a2efde353f595a05c4a75c9c0acceae5264eb7f740d40d80e"} Jan 21 18:47:44 crc kubenswrapper[4823]: I0121 18:47:44.263653 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" podStartSLOduration=1.263637012 podStartE2EDuration="1.263637012s" podCreationTimestamp="2026-01-21 18:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:47:44.254137438 +0000 UTC m=+5465.180268318" watchObservedRunningTime="2026-01-21 18:47:44.263637012 +0000 UTC m=+5465.189767872" Jan 21 18:47:45 crc kubenswrapper[4823]: I0121 18:47:45.070398 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:47:45 crc kubenswrapper[4823]: I0121 18:47:45.070684 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:47:45 crc kubenswrapper[4823]: I0121 18:47:45.254189 4823 generic.go:334] "Generic (PLEG): container finished" podID="02042b80-a430-430f-8939-1c58989d63b0" containerID="0d0a46c58befdd25e6db33a66f1b548a247c4fded0c16cd9b1e3e4aa9212b42b" exitCode=0 Jan 21 18:47:45 crc kubenswrapper[4823]: I0121 18:47:45.254258 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" event={"ID":"02042b80-a430-430f-8939-1c58989d63b0","Type":"ContainerDied","Data":"0d0a46c58befdd25e6db33a66f1b548a247c4fded0c16cd9b1e3e4aa9212b42b"} Jan 21 18:47:46 crc kubenswrapper[4823]: I0121 18:47:46.388917 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:46 crc kubenswrapper[4823]: I0121 18:47:46.521079 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxcdc\" (UniqueName: \"kubernetes.io/projected/02042b80-a430-430f-8939-1c58989d63b0-kube-api-access-zxcdc\") pod \"02042b80-a430-430f-8939-1c58989d63b0\" (UID: \"02042b80-a430-430f-8939-1c58989d63b0\") " Jan 21 18:47:46 crc kubenswrapper[4823]: I0121 18:47:46.521139 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02042b80-a430-430f-8939-1c58989d63b0-host\") pod \"02042b80-a430-430f-8939-1c58989d63b0\" (UID: \"02042b80-a430-430f-8939-1c58989d63b0\") " Jan 21 18:47:46 crc kubenswrapper[4823]: I0121 18:47:46.521290 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02042b80-a430-430f-8939-1c58989d63b0-host" (OuterVolumeSpecName: "host") pod "02042b80-a430-430f-8939-1c58989d63b0" (UID: "02042b80-a430-430f-8939-1c58989d63b0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 18:47:46 crc kubenswrapper[4823]: I0121 18:47:46.521888 4823 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02042b80-a430-430f-8939-1c58989d63b0-host\") on node \"crc\" DevicePath \"\"" Jan 21 18:47:46 crc kubenswrapper[4823]: I0121 18:47:46.546149 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02042b80-a430-430f-8939-1c58989d63b0-kube-api-access-zxcdc" (OuterVolumeSpecName: "kube-api-access-zxcdc") pod "02042b80-a430-430f-8939-1c58989d63b0" (UID: "02042b80-a430-430f-8939-1c58989d63b0"). InnerVolumeSpecName "kube-api-access-zxcdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:47:46 crc kubenswrapper[4823]: I0121 18:47:46.623606 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxcdc\" (UniqueName: \"kubernetes.io/projected/02042b80-a430-430f-8939-1c58989d63b0-kube-api-access-zxcdc\") on node \"crc\" DevicePath \"\"" Jan 21 18:47:46 crc kubenswrapper[4823]: I0121 18:47:46.687770 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t9pf9/crc-debug-fjhp2"] Jan 21 18:47:46 crc kubenswrapper[4823]: I0121 18:47:46.695736 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t9pf9/crc-debug-fjhp2"] Jan 21 18:47:47 crc kubenswrapper[4823]: I0121 18:47:47.274829 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="576b7cec28e7848a2efde353f595a05c4a75c9c0acceae5264eb7f740d40d80e" Jan 21 18:47:47 crc kubenswrapper[4823]: I0121 18:47:47.274918 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-fjhp2" Jan 21 18:47:47 crc kubenswrapper[4823]: I0121 18:47:47.355113 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02042b80-a430-430f-8939-1c58989d63b0" path="/var/lib/kubelet/pods/02042b80-a430-430f-8939-1c58989d63b0/volumes" Jan 21 18:47:47 crc kubenswrapper[4823]: I0121 18:47:47.848009 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t9pf9/crc-debug-5fh6g"] Jan 21 18:47:47 crc kubenswrapper[4823]: E0121 18:47:47.849257 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02042b80-a430-430f-8939-1c58989d63b0" containerName="container-00" Jan 21 18:47:47 crc kubenswrapper[4823]: I0121 18:47:47.849341 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="02042b80-a430-430f-8939-1c58989d63b0" containerName="container-00" Jan 21 18:47:47 crc kubenswrapper[4823]: I0121 18:47:47.849618 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="02042b80-a430-430f-8939-1c58989d63b0" containerName="container-00" Jan 21 18:47:47 crc kubenswrapper[4823]: I0121 18:47:47.850460 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:48 crc kubenswrapper[4823]: I0121 18:47:48.052376 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-host\") pod \"crc-debug-5fh6g\" (UID: \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\") " pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:48 crc kubenswrapper[4823]: I0121 18:47:48.052458 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wmf8\" (UniqueName: \"kubernetes.io/projected/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-kube-api-access-2wmf8\") pod \"crc-debug-5fh6g\" (UID: \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\") " pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:48 crc kubenswrapper[4823]: I0121 18:47:48.153722 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wmf8\" (UniqueName: \"kubernetes.io/projected/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-kube-api-access-2wmf8\") pod \"crc-debug-5fh6g\" (UID: \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\") " pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:48 crc kubenswrapper[4823]: I0121 18:47:48.154344 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-host\") pod \"crc-debug-5fh6g\" (UID: \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\") " pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:48 crc kubenswrapper[4823]: I0121 18:47:48.154430 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-host\") pod \"crc-debug-5fh6g\" (UID: \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\") " pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:48 crc kubenswrapper[4823]: I0121 18:47:48.180333 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wmf8\" (UniqueName: \"kubernetes.io/projected/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-kube-api-access-2wmf8\") pod \"crc-debug-5fh6g\" (UID: \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\") " 
pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:48 crc kubenswrapper[4823]: I0121 18:47:48.471400 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:49 crc kubenswrapper[4823]: I0121 18:47:49.319453 4823 generic.go:334] "Generic (PLEG): container finished" podID="9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da" containerID="af59006ea5cc96f6bb68b3fec59253270fd7441e4d8a3df0fa49378627f39c45" exitCode=0 Jan 21 18:47:49 crc kubenswrapper[4823]: I0121 18:47:49.319509 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" event={"ID":"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da","Type":"ContainerDied","Data":"af59006ea5cc96f6bb68b3fec59253270fd7441e4d8a3df0fa49378627f39c45"} Jan 21 18:47:49 crc kubenswrapper[4823]: I0121 18:47:49.319739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" event={"ID":"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da","Type":"ContainerStarted","Data":"b170b312a9eaad3d70d08dd3755ae30c8a58c385faeaf29329a59bf352338680"} Jan 21 18:47:49 crc kubenswrapper[4823]: I0121 18:47:49.367717 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t9pf9/crc-debug-5fh6g"] Jan 21 18:47:49 crc kubenswrapper[4823]: I0121 18:47:49.379349 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t9pf9/crc-debug-5fh6g"] Jan 21 18:47:50 crc kubenswrapper[4823]: I0121 18:47:50.442775 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:50 crc kubenswrapper[4823]: I0121 18:47:50.600221 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-host\") pod \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\" (UID: \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\") " Jan 21 18:47:50 crc kubenswrapper[4823]: I0121 18:47:50.600338 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wmf8\" (UniqueName: \"kubernetes.io/projected/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-kube-api-access-2wmf8\") pod \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\" (UID: \"9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da\") " Jan 21 18:47:50 crc kubenswrapper[4823]: I0121 18:47:50.600404 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-host" (OuterVolumeSpecName: "host") pod "9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da" (UID: "9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 18:47:50 crc kubenswrapper[4823]: I0121 18:47:50.601110 4823 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-host\") on node \"crc\" DevicePath \"\"" Jan 21 18:47:50 crc kubenswrapper[4823]: I0121 18:47:50.613430 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-kube-api-access-2wmf8" (OuterVolumeSpecName: "kube-api-access-2wmf8") pod "9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da" (UID: "9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da"). InnerVolumeSpecName "kube-api-access-2wmf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:47:50 crc kubenswrapper[4823]: I0121 18:47:50.705598 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wmf8\" (UniqueName: \"kubernetes.io/projected/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da-kube-api-access-2wmf8\") on node \"crc\" DevicePath \"\"" Jan 21 18:47:51 crc kubenswrapper[4823]: I0121 18:47:51.340054 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b170b312a9eaad3d70d08dd3755ae30c8a58c385faeaf29329a59bf352338680" Jan 21 18:47:51 crc kubenswrapper[4823]: I0121 18:47:51.340434 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t9pf9/crc-debug-5fh6g" Jan 21 18:47:51 crc kubenswrapper[4823]: I0121 18:47:51.354517 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da" path="/var/lib/kubelet/pods/9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da/volumes" Jan 21 18:48:13 crc kubenswrapper[4823]: I0121 18:48:13.794048 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c955fd8fb-p27tx_f6a34862-ff4b-4bc6-ba5c-803fdaeb722f/barbican-api/0.log" Jan 21 18:48:13 crc kubenswrapper[4823]: I0121 18:48:13.948389 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c955fd8fb-p27tx_f6a34862-ff4b-4bc6-ba5c-803fdaeb722f/barbican-api-log/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.036132 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-666c84c45d-q2ttq_c127d68c-e927-419c-a632-2a85db61e595/barbican-keystone-listener/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.071451 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-666c84c45d-q2ttq_c127d68c-e927-419c-a632-2a85db61e595/barbican-keystone-listener-log/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.217437 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-78cfb466c-qccf2_0509a27d-cceb-45b3-9595-b5e5489a3934/barbican-worker/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.294185 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-78cfb466c-qccf2_0509a27d-cceb-45b3-9595-b5e5489a3934/barbican-worker-log/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.453306 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-48h89_6acc67e0-e641-4a80-a9e0-e1373d9de675/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.546459 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5aebb1a-417f-4064-963c-1331aaf0f63b/ceilometer-central-agent/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.601218 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5aebb1a-417f-4064-963c-1331aaf0f63b/ceilometer-notification-agent/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.680135 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5aebb1a-417f-4064-963c-1331aaf0f63b/proxy-httpd/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.700623 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5aebb1a-417f-4064-963c-1331aaf0f63b/sg-core/0.log" 
Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.904725 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ded7b85d-f0d3-4e9b-b121-cadd9b8488b6/cinder-api/0.log" Jan 21 18:48:14 crc kubenswrapper[4823]: I0121 18:48:14.937046 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ded7b85d-f0d3-4e9b-b121-cadd9b8488b6/cinder-api-log/0.log" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.070516 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.070568 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.070614 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.071375 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22e7ab45ab5c64a08bf3189ccc361f989aa527602f5fac36ce02fd9f02fa92e3"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.071432 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://22e7ab45ab5c64a08bf3189ccc361f989aa527602f5fac36ce02fd9f02fa92e3" gracePeriod=600 Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.102936 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a1799c60-9bb6-473f-a01f-490dfb36b396/cinder-scheduler/0.log" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.181598 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a1799c60-9bb6-473f-a01f-490dfb36b396/probe/0.log" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.269742 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-v4pdf_9dc1e78d-7f63-4a5e-bade-1f72df039863/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.471890 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5bmrk_7db429cb-d9dd-4122-b81b-239b40952922/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.552993 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="22e7ab45ab5c64a08bf3189ccc361f989aa527602f5fac36ce02fd9f02fa92e3" exitCode=0 Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.553036 4823 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"22e7ab45ab5c64a08bf3189ccc361f989aa527602f5fac36ce02fd9f02fa92e3"} Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.553062 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668"} Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.553079 4823 scope.go:117] "RemoveContainer" containerID="b8d60c011e54c11a71710f4d7f32871efa4b766940debb0dbb0e70cb8c903c64" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.569704 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-mt792_01324a9e-711d-4754-8e1c-4d2ce8ae749a/init/0.log" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.759299 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-mt792_01324a9e-711d-4754-8e1c-4d2ce8ae749a/init/0.log" Jan 21 18:48:15 crc kubenswrapper[4823]: I0121 18:48:15.885777 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-mt792_01324a9e-711d-4754-8e1c-4d2ce8ae749a/dnsmasq-dns/0.log" Jan 21 18:48:16 crc kubenswrapper[4823]: I0121 18:48:16.083057 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-4x725_faa0648b-80a8-4ccf-8295-897de512b670/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:16 crc kubenswrapper[4823]: I0121 18:48:16.248996 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c47b64d8-740b-4759-98aa-9d52e87030ae/glance-httpd/0.log" Jan 21 18:48:16 crc kubenswrapper[4823]: I0121 18:48:16.304216 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c47b64d8-740b-4759-98aa-9d52e87030ae/glance-log/0.log" Jan 21 18:48:16 crc kubenswrapper[4823]: I0121 18:48:16.461497 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cb937adb-bff8-4b3c-950f-cc5dddd41b95/glance-httpd/0.log" Jan 21 18:48:16 crc kubenswrapper[4823]: I0121 18:48:16.538629 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cb937adb-bff8-4b3c-950f-cc5dddd41b95/glance-log/0.log" Jan 21 18:48:16 crc kubenswrapper[4823]: I0121 18:48:16.674988 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66bddd7dd6-67b6t_709299a1-f499-447b-a738-efe1b32c7abf/horizon/0.log" Jan 21 18:48:16 crc kubenswrapper[4823]: I0121 18:48:16.808743 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-x5db2_3a7db3fc-7bbd-447a-b46d-bea0ab214e38/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:17 crc kubenswrapper[4823]: I0121 18:48:17.051123 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-9c6q9_a0725d92-cd2b-4258-b7dc-3c76b8f75eb0/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:17 crc kubenswrapper[4823]: I0121 18:48:17.279023 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29483641-vd8l7_832c4531-1b16-4ef3-b1b4-cb89dbe48cfb/keystone-cron/0.log" Jan 21 18:48:17 crc kubenswrapper[4823]: I0121 18:48:17.400060 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66bddd7dd6-67b6t_709299a1-f499-447b-a738-efe1b32c7abf/horizon-log/0.log" Jan 21 18:48:17 crc kubenswrapper[4823]: I0121 18:48:17.547385 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6dc7996bf8-rhk6q_296d5316-1483-48f5-98f9-3b0ca03c4268/keystone-api/0.log" Jan 21 18:48:17 crc kubenswrapper[4823]: I0121 18:48:17.625369 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0f832894-8cfb-4f27-b494-21b6edc4516f/kube-state-metrics/0.log" Jan 21 18:48:17 crc kubenswrapper[4823]: I0121 18:48:17.812065 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-xbt96_543a718f-39d9-4c35-bd9e-888f739b9726/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:18 crc kubenswrapper[4823]: I0121 18:48:18.249693 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jxlqt_07d18401-f812-4b0d-91fe-b330f237f2f8/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:18 crc kubenswrapper[4823]: I0121 18:48:18.323963 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-959c75cfc-2zd2j_250ad919-e0b5-4ae5-8f77-7631bb71aba0/neutron-httpd/0.log" Jan 21 18:48:18 crc kubenswrapper[4823]: I0121 18:48:18.362233 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-959c75cfc-2zd2j_250ad919-e0b5-4ae5-8f77-7631bb71aba0/neutron-api/0.log" Jan 21 18:48:18 crc kubenswrapper[4823]: I0121 18:48:18.879567 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ed613878-64dc-4f97-a498-8ef220d2d17e/nova-cell0-conductor-conductor/0.log" Jan 21 18:48:19 crc kubenswrapper[4823]: I0121 18:48:19.214490 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_df646a74-5ad5-41ea-8ef1-ab4f6287d876/nova-cell1-conductor-conductor/0.log" Jan 21 18:48:19 crc kubenswrapper[4823]: I0121 18:48:19.668570 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8e688551-fa1e-41c2-ae1e-18ca5073c34e/nova-api-log/0.log" Jan 21 18:48:19 crc kubenswrapper[4823]: I0121 18:48:19.694810 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_61085eff-9999-4d7e-b8a2-a1a548aa4cd6/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 18:48:19 crc kubenswrapper[4823]: I0121 18:48:19.920271 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-sfkrc_ff90f069-d94f-4af4-958c-4e43099fe702/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:19 crc kubenswrapper[4823]: I0121 18:48:19.920305 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8e688551-fa1e-41c2-ae1e-18ca5073c34e/nova-api-api/0.log" Jan 21 18:48:20 crc kubenswrapper[4823]: I0121 18:48:20.080344 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_28ce89b1-4004-4486-872c-87d8965725da/nova-metadata-log/0.log" Jan 21 18:48:20 crc kubenswrapper[4823]: I0121 18:48:20.409078 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_dbb101cd-8034-422f-9016-d0baa0d9513b/mysql-bootstrap/0.log" Jan 21 18:48:20 crc kubenswrapper[4823]: I0121 18:48:20.539240 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e4384a0e-44ee-479f-bd81-f3b486c71da8/nova-scheduler-scheduler/0.log" Jan 21 18:48:20 crc kubenswrapper[4823]: I0121 18:48:20.627861 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_dbb101cd-8034-422f-9016-d0baa0d9513b/galera/0.log" Jan 21 18:48:20 crc kubenswrapper[4823]: I0121 18:48:20.641370 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_dbb101cd-8034-422f-9016-d0baa0d9513b/mysql-bootstrap/0.log" Jan 21 18:48:20 crc kubenswrapper[4823]: I0121 18:48:20.847514 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_350b339b-c723-4ff3-ab95-83e82c6c4d52/mysql-bootstrap/0.log" Jan 21 18:48:21 crc kubenswrapper[4823]: I0121 18:48:21.094366 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_350b339b-c723-4ff3-ab95-83e82c6c4d52/mysql-bootstrap/0.log" Jan 21 18:48:21 crc kubenswrapper[4823]: I0121 18:48:21.136215 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_350b339b-c723-4ff3-ab95-83e82c6c4d52/galera/0.log" Jan 21 18:48:21 crc kubenswrapper[4823]: I0121 18:48:21.321435 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_059733a2-b933-471e-b40b-3618874187a0/openstackclient/0.log" Jan 21 18:48:21 crc kubenswrapper[4823]: I0121 18:48:21.410980 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-bfdrb_9588aa19-b204-450e-a781-2b3d119bd86e/ovn-controller/0.log" Jan 21 18:48:21 crc kubenswrapper[4823]: I0121 18:48:21.593816 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vcbxp_86467b0e-fd16-47e1-93d4-98ff2032c226/openstack-network-exporter/0.log" Jan 21 18:48:21 crc kubenswrapper[4823]: I0121 18:48:21.770145 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ht698_80f6b9ff-4282-4118-8a46-acdae16c9a3d/ovsdb-server-init/0.log" Jan 21 18:48:21 crc kubenswrapper[4823]: I0121 18:48:21.985499 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ht698_80f6b9ff-4282-4118-8a46-acdae16c9a3d/ovsdb-server-init/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.037378 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ht698_80f6b9ff-4282-4118-8a46-acdae16c9a3d/ovs-vswitchd/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.042284 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ht698_80f6b9ff-4282-4118-8a46-acdae16c9a3d/ovsdb-server/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.178156 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_28ce89b1-4004-4486-872c-87d8965725da/nova-metadata-metadata/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.302081 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jd9kk_dc8270e8-f21a-4e4e-986f-fb5eb95dcd8c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.406366 4823 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_9ecf31a3-66b0-40d6-8eab-93050f79c68a/openstack-network-exporter/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.564769 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c2e95955-387e-4ec0-a5b4-25a41b7cf9c9/openstack-network-exporter/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.567161 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9ecf31a3-66b0-40d6-8eab-93050f79c68a/ovn-northd/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.651126 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c2e95955-387e-4ec0-a5b4-25a41b7cf9c9/ovsdbserver-nb/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.797349 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf/ovsdbserver-sb/0.log" Jan 21 18:48:22 crc kubenswrapper[4823]: I0121 18:48:22.806199 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_96c4434d-b98a-4bc8-8d26-bd9a2a6d98bf/openstack-network-exporter/0.log" Jan 21 18:48:23 crc kubenswrapper[4823]: I0121 18:48:23.225191 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68db9cf4b4-kzfgq_eba57ea7-deed-4d3e-9327-f2baaf9e920d/placement-api/0.log" Jan 21 18:48:23 crc kubenswrapper[4823]: I0121 18:48:23.258127 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68db9cf4b4-kzfgq_eba57ea7-deed-4d3e-9327-f2baaf9e920d/placement-log/0.log" Jan 21 18:48:23 crc kubenswrapper[4823]: I0121 18:48:23.408120 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7dff15e-5ba2-490e-8660-5e0132b84f0f/init-config-reloader/0.log" Jan 21 18:48:23 crc kubenswrapper[4823]: I0121 18:48:23.569615 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7dff15e-5ba2-490e-8660-5e0132b84f0f/init-config-reloader/0.log" Jan 21 18:48:23 crc kubenswrapper[4823]: I0121 18:48:23.569820 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7dff15e-5ba2-490e-8660-5e0132b84f0f/config-reloader/0.log" Jan 21 18:48:23 crc kubenswrapper[4823]: I0121 18:48:23.605787 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7dff15e-5ba2-490e-8660-5e0132b84f0f/prometheus/0.log" Jan 21 18:48:23 crc kubenswrapper[4823]: I0121 18:48:23.655142 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7dff15e-5ba2-490e-8660-5e0132b84f0f/thanos-sidecar/0.log" Jan 21 18:48:23 crc kubenswrapper[4823]: I0121 18:48:23.822892 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ad50cd2-2f93-4a56-aa86-8b81e205531e/setup-container/0.log" Jan 21 18:48:23 crc kubenswrapper[4823]: I0121 18:48:23.972625 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ad50cd2-2f93-4a56-aa86-8b81e205531e/setup-container/0.log" Jan 21 18:48:24 crc kubenswrapper[4823]: I0121 18:48:24.082218 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b31dcb7b-15e2-4a14-bdab-d2887043e52a/setup-container/0.log" Jan 21 18:48:24 crc kubenswrapper[4823]: I0121 18:48:24.139516 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ad50cd2-2f93-4a56-aa86-8b81e205531e/rabbitmq/0.log" Jan 21 18:48:24 crc kubenswrapper[4823]: I0121 18:48:24.386609 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-tcfrm_d5bfda69-3639-47e3-b736-c3693e826852/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:24 crc kubenswrapper[4823]: I0121 18:48:24.410166 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b31dcb7b-15e2-4a14-bdab-d2887043e52a/rabbitmq/0.log" Jan 21 18:48:24 crc kubenswrapper[4823]: I0121 18:48:24.452540 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b31dcb7b-15e2-4a14-bdab-d2887043e52a/setup-container/0.log" Jan 21 18:48:24 crc kubenswrapper[4823]: I0121 18:48:24.638589 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-nm2tl_e40a0e6f-16f4-4050-ad53-1c5678c23a87/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:24 crc kubenswrapper[4823]: I0121 18:48:24.749446 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6s5p5_d7f8af6b-470c-4ab7-9b20-b9c5b6a8675b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:24 crc kubenswrapper[4823]: I0121 18:48:24.881549 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rwc2k_2fc18e3f-f804-4b4a-abda-ccf74452f2c6/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:25 crc kubenswrapper[4823]: I0121 18:48:25.014549 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-k225g_d36dcf94-ed56-418c-8e90-7a4da66a51d9/ssh-known-hosts-edpm-deployment/0.log" Jan 21 18:48:25 crc kubenswrapper[4823]: I0121 18:48:25.340317 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76d5f7bd8c-dgmn6_6b9e2d6c-5e93-426c-8c47-478d9ad360ed/proxy-server/0.log" Jan 21 18:48:25 crc kubenswrapper[4823]: I0121 18:48:25.458284 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76d5f7bd8c-dgmn6_6b9e2d6c-5e93-426c-8c47-478d9ad360ed/proxy-httpd/0.log" Jan 21 18:48:25 crc kubenswrapper[4823]: I0121 18:48:25.480235 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-w2wtt_dfea14ef-6f13-4b56-99c5-74c8bb2d5e43/swift-ring-rebalance/0.log" Jan 21 18:48:25 crc kubenswrapper[4823]: I0121 18:48:25.585253 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/account-auditor/0.log" Jan 21 18:48:25 crc kubenswrapper[4823]: I0121 18:48:25.731600 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/account-reaper/0.log" Jan 21 18:48:25 crc kubenswrapper[4823]: I0121 18:48:25.746752 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/account-replicator/0.log" Jan 21 18:48:25 crc kubenswrapper[4823]: I0121 18:48:25.838347 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/account-server/0.log" Jan 21 18:48:25 crc kubenswrapper[4823]: I0121 18:48:25.952407 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/container-auditor/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.012216 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/container-replicator/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.092878 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/container-server/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.119072 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/container-updater/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.191430 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/object-auditor/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.265879 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/object-expirer/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.413551 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/object-server/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.424910 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/object-replicator/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.433876 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/object-updater/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.533561 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/rsync/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.599424 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1146f69b-d935-4a56-9f65-e96bf9539c14/swift-recon-cron/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.712094 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-dkpcc_5f6be0ba-bfb2-4685-a2b4-24f4f8fe79cf/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:26 crc kubenswrapper[4823]: I0121 18:48:26.969407 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0110a9b7-0ded-42e4-b57b-7d6f8bf5f62f/tempest-tests-tempest-tests-runner/0.log" Jan 21 18:48:27 crc kubenswrapper[4823]: I0121 18:48:27.017972 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_ec737905-ba30-4dbe-8d04-d5ee2c19d624/test-operator-logs-container/0.log" Jan 21 18:48:27 crc kubenswrapper[4823]: I0121 18:48:27.219338 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-x4mps_354d4cde-c7c6-4b49-9ec0-11c551fc6a7a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 18:48:28 crc kubenswrapper[4823]: I0121 18:48:28.027656 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_624bfb5c-23b4-4da4-ba5a-15db0c47cf2e/watcher-applier/0.log" Jan 21 18:48:28 crc 
kubenswrapper[4823]: I0121 18:48:28.572682 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_87b074e0-8609-4e85-a0df-ce3376e7b7df/watcher-api-log/0.log" Jan 21 18:48:29 crc kubenswrapper[4823]: I0121 18:48:29.241296 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_f25102d9-f15b-4887-82a3-7380b9d3d062/watcher-decision-engine/0.log" Jan 21 18:48:30 crc kubenswrapper[4823]: I0121 18:48:30.538816 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_111d0907-497e-401f-a017-76534940920e/memcached/0.log" Jan 21 18:48:31 crc kubenswrapper[4823]: I0121 18:48:31.147819 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_87b074e0-8609-4e85-a0df-ce3376e7b7df/watcher-api/0.log" Jan 21 18:48:54 crc kubenswrapper[4823]: I0121 18:48:54.433978 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-r6txm_1a2264de-b154-4669-aeae-fd1e71b29b0d/manager/0.log" Jan 21 18:48:54 crc kubenswrapper[4823]: I0121 18:48:54.536806 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-z8pp2_41648873-3abc-47a7-8c4d-8c3a15bdf09e/manager/0.log" Jan 21 18:48:54 crc kubenswrapper[4823]: I0121 18:48:54.626296 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-pd8gs_1282a2fd-37f7-4fd4-9c38-69b9c87f2910/manager/0.log" Jan 21 18:48:54 crc kubenswrapper[4823]: I0121 18:48:54.729808 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs_d1c2ba5e-6779-4a6b-be72-0c26004cd2f1/util/0.log" Jan 21 18:48:54 crc kubenswrapper[4823]: I0121 18:48:54.926700 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs_d1c2ba5e-6779-4a6b-be72-0c26004cd2f1/util/0.log" Jan 21 18:48:54 crc kubenswrapper[4823]: I0121 18:48:54.971262 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs_d1c2ba5e-6779-4a6b-be72-0c26004cd2f1/pull/0.log" Jan 21 18:48:54 crc kubenswrapper[4823]: I0121 18:48:54.972171 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs_d1c2ba5e-6779-4a6b-be72-0c26004cd2f1/pull/0.log" Jan 21 18:48:55 crc kubenswrapper[4823]: I0121 18:48:55.111980 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs_d1c2ba5e-6779-4a6b-be72-0c26004cd2f1/util/0.log" Jan 21 18:48:55 crc kubenswrapper[4823]: I0121 18:48:55.123524 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs_d1c2ba5e-6779-4a6b-be72-0c26004cd2f1/pull/0.log" Jan 21 18:48:55 crc kubenswrapper[4823]: I0121 18:48:55.228700 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed61edce91ea2c704acb1a718d2ff383959dfd728de1981a1f5f70d25etnjbs_d1c2ba5e-6779-4a6b-be72-0c26004cd2f1/extract/0.log" Jan 21 18:48:55 crc kubenswrapper[4823]: I0121 18:48:55.385770 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-hx6mm_4923d7a0-77b7-4d86-a6fc-fff0e9a81766/manager/0.log" Jan 21 18:48:55 crc kubenswrapper[4823]: I0121 18:48:55.414329 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-2b4l6_9fe37a8e-6b17-4aad-8787-142c28faac52/manager/0.log" Jan 21 18:48:55 crc kubenswrapper[4823]: I0121 18:48:55.572090 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5srsl_c05b436d-2d25-449f-a929-9424e4b6021f/manager/0.log" Jan 21 18:48:55 crc kubenswrapper[4823]: I0121 18:48:55.790394 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-8b4ft_57081a15-e11e-4d49-b516-3f8ccabea011/manager/0.log" Jan 21 18:48:55 crc kubenswrapper[4823]: I0121 18:48:55.922825 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-5lr2z_b57ac152-55e4-445b-be02-c74b9fe96905/manager/0.log" Jan 21 18:48:56 crc kubenswrapper[4823]: I0121 18:48:56.013591 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-thq22_85830ef7-1db0-47bf-b03f-0720fceda12b/manager/0.log" Jan 21 18:48:56 crc kubenswrapper[4823]: I0121 18:48:56.102337 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-56n9m_94b91e49-7b7f-4e7a-bcdf-31d847d8c517/manager/0.log" Jan 21 18:48:56 crc kubenswrapper[4823]: I0121 18:48:56.295183 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-w8s6p_49bc570b-b84d-48a3-b322-95b9ece80f26/manager/0.log" Jan 21 18:48:56 crc kubenswrapper[4823]: I0121 18:48:56.368803 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-pqfhj_5cc4aa26-0ff4-4f18-b6e1-e2fdcea53128/manager/0.log" Jan 21 18:48:56 crc kubenswrapper[4823]: I0121 18:48:56.553876 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-rxvvc_ae942c2a-d0df-4bf2-8e76-ca95474ad50f/manager/0.log" Jan 21 18:48:56 crc kubenswrapper[4823]: I0121 18:48:56.622014 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-cr4d5_6571a611-d208-419b-9304-5a6d6b8c1d1b/manager/0.log" Jan 21 18:48:56 crc kubenswrapper[4823]: I0121 18:48:56.712760 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854fjj5d_b7356c47-15be-48c6-a78e-5389b077d2c6/manager/0.log" Jan 21 18:48:56 crc kubenswrapper[4823]: I0121 18:48:56.957343 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-65d788f684-rh8vw_a8052883-2f70-4dbf-b81c-2bdc0752c41e/operator/0.log" Jan 21 18:48:57 crc kubenswrapper[4823]: I0121 18:48:57.120093 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x8vd6_5a7085d9-6b10-4bfb-a9ca-30f9ea82081d/registry-server/0.log" Jan 21 18:48:57 crc kubenswrapper[4823]: I0121 18:48:57.382770 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9qqbg_9819367b-7d70-48cf-bdde-0c0e2ccf5fbd/manager/0.log" Jan 21 18:48:57 crc kubenswrapper[4823]: I0121 18:48:57.495718 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lds9d_2a4a8662-2615-4cf3-950d-2602ec921aaf/manager/0.log" Jan 21 18:48:57 crc kubenswrapper[4823]: I0121 18:48:57.756298 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-h462s_0d36d6af-75ff-4f6e-88aa-154e71609284/operator/0.log" Jan 21 18:48:57 crc kubenswrapper[4823]: I0121 18:48:57.959156 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-52tkk_33b3d08f-7b7d-4fa3-94e1-5391a80d6aaf/manager/0.log" Jan 21 18:48:58 crc kubenswrapper[4823]: I0121 18:48:58.166750 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-msnsz_b6b63b33-5320-4cc6-a4b8-c359e19cdfef/manager/0.log" Jan 21 18:48:58 crc kubenswrapper[4823]: I0121 18:48:58.198541 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-b4h92_3b0d5663-0000-4ddf-bd72-893997a79681/manager/0.log" Jan 21 18:48:58 crc kubenswrapper[4823]: I0121 18:48:58.294680 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cbfc7c9bd-x94nn_2dcb7191-3fd3-435f-bf67-713b3953c63f/manager/0.log" Jan 21 18:48:58 crc kubenswrapper[4823]: I0121 18:48:58.404152 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cdc5b758-zbxpz_29b92151-85da-4df7-a4f1-c8fb7e08a4cf/manager/0.log" Jan 21 18:49:18 crc kubenswrapper[4823]: I0121 18:49:18.622586 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ghcjn_7a64c737-acdd-4653-ad52-bcf69b3b69f8/control-plane-machine-set-operator/0.log" Jan 21 18:49:18 crc kubenswrapper[4823]: I0121 18:49:18.811095 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tnrwq_36ae04e1-ee34-492a-b2af-012a3fb66740/kube-rbac-proxy/0.log" Jan 21 18:49:18 crc kubenswrapper[4823]: I0121 18:49:18.846279 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tnrwq_36ae04e1-ee34-492a-b2af-012a3fb66740/machine-api-operator/0.log" Jan 21 18:49:30 crc kubenswrapper[4823]: I0121 18:49:30.727448 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-rqrjt_419c5420-cb1c-474e-b276-67c540b41ec0/cert-manager-controller/0.log" Jan 21 18:49:30 crc kubenswrapper[4823]: I0121 18:49:30.930078 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-b7ddt_594754ac-1219-4432-acb9-6f3bb802b248/cert-manager-cainjector/0.log" Jan 21 18:49:30 crc kubenswrapper[4823]: I0121 18:49:30.935697 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-r49b9_dffc6c84-eb72-4348-a370-ade90b81ea5c/cert-manager-webhook/0.log" Jan 21 18:49:42 crc kubenswrapper[4823]: I0121 18:49:42.726634 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-bhp5f_42c0afb2-20ca-4f23-ab57-db27e84d475a/nmstate-console-plugin/0.log" Jan 21 18:49:42 crc kubenswrapper[4823]: I0121 18:49:42.867633 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ftkrm_8ab6a957-a277-4d36-86c5-48dcbe485a49/nmstate-handler/0.log" Jan 21 18:49:42 crc kubenswrapper[4823]: I0121 18:49:42.973290 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-p4sm7_539647c9-ef8e-4225-b49b-0f5da3130d48/kube-rbac-proxy/0.log" Jan 21 18:49:43 crc kubenswrapper[4823]: I0121 18:49:43.026973 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-p4sm7_539647c9-ef8e-4225-b49b-0f5da3130d48/nmstate-metrics/0.log" Jan 21 18:49:43 crc kubenswrapper[4823]: I0121 18:49:43.125594 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-sc268_702ac8ba-dd10-4b6e-978b-cf873a40ceb3/nmstate-operator/0.log" Jan 21 18:49:43 crc kubenswrapper[4823]: I0121 18:49:43.240541 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-9kdw7_fa05f98d-769b-41db-9bc1-5d8f19eff210/nmstate-webhook/0.log" Jan 21 18:49:56 crc kubenswrapper[4823]: I0121 18:49:56.432736 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-vmjmj_654af63c-1337-4345-a2d6-4aa64462e8a9/prometheus-operator/0.log" Jan 21 18:49:56 crc kubenswrapper[4823]: I0121 18:49:56.620487 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p_497404e0-8944-46a8-9d67-a4334950f54c/prometheus-operator-admission-webhook/0.log" Jan 21 18:49:56 crc kubenswrapper[4823]: I0121 18:49:56.703533 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f_52f5c7d9-13fa-4e74-be3b-d4aee174b931/prometheus-operator-admission-webhook/0.log" Jan 21 18:49:56 crc kubenswrapper[4823]: I0121 18:49:56.812919 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l9hhp_7e55b3a4-cfae-41a1-a153-611e2e96dc75/operator/0.log" Jan 21 18:49:56 crc kubenswrapper[4823]: I0121 18:49:56.903531 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-76nsr_f7048824-b7bd-423e-b237-f4ccb584bb8a/perses-operator/0.log" Jan 21 18:50:09 crc kubenswrapper[4823]: I0121 18:50:09.859048 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xzv5z_146255fb-d7e1-463b-93a1-365c08693116/controller/0.log" Jan 21 18:50:09 crc kubenswrapper[4823]: I0121 18:50:09.898877 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xzv5z_146255fb-d7e1-463b-93a1-365c08693116/kube-rbac-proxy/0.log" Jan 21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.248229 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zkrs4_3365a322-818c-4bc8-b15f-fb77d81d76ee/frr-k8s-webhook-server/0.log" Jan 21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.278099 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-frr-files/0.log" Jan 
21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.561288 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-metrics/0.log" Jan 21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.569506 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-reloader/0.log" Jan 21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.609462 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-reloader/0.log" Jan 21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.622460 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-frr-files/0.log" Jan 21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.747452 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-reloader/0.log" Jan 21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.785146 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-frr-files/0.log" Jan 21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.837896 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-metrics/0.log" Jan 21 18:50:10 crc kubenswrapper[4823]: I0121 18:50:10.906067 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-metrics/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.078812 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-reloader/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.098994 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-frr-files/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.099112 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/cp-metrics/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.185682 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/controller/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.373406 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/frr-metrics/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.385434 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/kube-rbac-proxy/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.446735 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/kube-rbac-proxy-frr/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.644404 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/reloader/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.709057 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7b84d84c8d-gvj55_ed67552e-2644-4592-be91-31fb5ad23152/manager/0.log" Jan 21 18:50:11 crc kubenswrapper[4823]: I0121 18:50:11.959092 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6774cbb849-xcfsh_db56558d-45f7-4053-b14f-fb6c4ad56f3e/webhook-server/0.log" Jan 21 18:50:12 crc kubenswrapper[4823]: I0121 18:50:12.207447 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v7dzb_3be600b2-8746-4191-9e0c-e2007fa95890/kube-rbac-proxy/0.log" Jan 21 18:50:12 crc kubenswrapper[4823]: I0121 18:50:12.880079 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v7dzb_3be600b2-8746-4191-9e0c-e2007fa95890/speaker/0.log" Jan 21 18:50:13 crc kubenswrapper[4823]: I0121 18:50:13.269662 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zf8s8_8d053092-d968-421e-8413-7366fb2d5350/frr/0.log" Jan 21 18:50:15 crc kubenswrapper[4823]: I0121 18:50:15.070344 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:50:15 crc kubenswrapper[4823]: I0121 18:50:15.071238 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:50:26 crc kubenswrapper[4823]: I0121 18:50:26.785861 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j_4186705f-031e-4b39-ae26-9835b3b8a619/util/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.222914 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j_4186705f-031e-4b39-ae26-9835b3b8a619/pull/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.249244 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j_4186705f-031e-4b39-ae26-9835b3b8a619/util/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.283373 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j_4186705f-031e-4b39-ae26-9835b3b8a619/pull/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.489115 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j_4186705f-031e-4b39-ae26-9835b3b8a619/util/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.495199 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j_4186705f-031e-4b39-ae26-9835b3b8a619/pull/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.548849 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7fm2j_4186705f-031e-4b39-ae26-9835b3b8a619/extract/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.659912 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf_84775bf8-065e-4f98-987c-733201a3d87c/util/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.881039 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf_84775bf8-065e-4f98-987c-733201a3d87c/pull/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.919357 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf_84775bf8-065e-4f98-987c-733201a3d87c/util/0.log" Jan 21 18:50:27 crc kubenswrapper[4823]: I0121 18:50:27.928354 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf_84775bf8-065e-4f98-987c-733201a3d87c/pull/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.072233 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf_84775bf8-065e-4f98-987c-733201a3d87c/util/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.076355 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf_84775bf8-065e-4f98-987c-733201a3d87c/pull/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.121115 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xt4wf_84775bf8-065e-4f98-987c-733201a3d87c/extract/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.366729 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn_48b0d529-dbd0-4203-8af0-e7f42be18789/util/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.497418 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn_48b0d529-dbd0-4203-8af0-e7f42be18789/pull/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.553425 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn_48b0d529-dbd0-4203-8af0-e7f42be18789/util/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.554577 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn_48b0d529-dbd0-4203-8af0-e7f42be18789/pull/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.799115 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn_48b0d529-dbd0-4203-8af0-e7f42be18789/extract/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.804236 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn_48b0d529-dbd0-4203-8af0-e7f42be18789/util/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.816023 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlsgn_48b0d529-dbd0-4203-8af0-e7f42be18789/pull/0.log" Jan 21 18:50:28 crc kubenswrapper[4823]: I0121 18:50:28.999046 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vt4v_940db9a2-6944-4ffe-a14f-fd7925061d2a/extract-utilities/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.220576 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vt4v_940db9a2-6944-4ffe-a14f-fd7925061d2a/extract-content/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.253810 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vt4v_940db9a2-6944-4ffe-a14f-fd7925061d2a/extract-utilities/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.262361 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vt4v_940db9a2-6944-4ffe-a14f-fd7925061d2a/extract-content/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.434129 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vt4v_940db9a2-6944-4ffe-a14f-fd7925061d2a/extract-content/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.452668 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vt4v_940db9a2-6944-4ffe-a14f-fd7925061d2a/extract-utilities/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.656131 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9slhm_c777ae33-b9b8-4f1a-a718-1c979513d33b/extract-utilities/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.802069 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vt4v_940db9a2-6944-4ffe-a14f-fd7925061d2a/registry-server/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.906169 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9slhm_c777ae33-b9b8-4f1a-a718-1c979513d33b/extract-utilities/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.953597 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9slhm_c777ae33-b9b8-4f1a-a718-1c979513d33b/extract-content/0.log" Jan 21 18:50:29 crc kubenswrapper[4823]: I0121 18:50:29.968748 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9slhm_c777ae33-b9b8-4f1a-a718-1c979513d33b/extract-content/0.log" Jan 21 18:50:30 crc kubenswrapper[4823]: I0121 18:50:30.113751 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9slhm_c777ae33-b9b8-4f1a-a718-1c979513d33b/extract-utilities/0.log" Jan 21 18:50:30 crc kubenswrapper[4823]: I0121 18:50:30.157546 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9slhm_c777ae33-b9b8-4f1a-a718-1c979513d33b/extract-content/0.log" Jan 21 18:50:30 crc kubenswrapper[4823]: I0121 18:50:30.442335 4823 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-x57gs_4a5e0f22-8962-482f-9848-31cc195390ca/marketplace-operator/0.log" Jan 21 18:50:30 crc kubenswrapper[4823]: I0121 18:50:30.552011 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nlg7m_3235d0ea-4f5c-4b04-b519-c8c7561a41ee/extract-utilities/0.log" Jan 21 18:50:30 crc kubenswrapper[4823]: I0121 18:50:30.789120 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nlg7m_3235d0ea-4f5c-4b04-b519-c8c7561a41ee/extract-utilities/0.log" Jan 21 18:50:30 crc kubenswrapper[4823]: I0121 18:50:30.852045 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nlg7m_3235d0ea-4f5c-4b04-b519-c8c7561a41ee/extract-content/0.log" Jan 21 18:50:30 crc kubenswrapper[4823]: I0121 18:50:30.893901 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nlg7m_3235d0ea-4f5c-4b04-b519-c8c7561a41ee/extract-content/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.054760 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9slhm_c777ae33-b9b8-4f1a-a718-1c979513d33b/registry-server/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.078834 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nlg7m_3235d0ea-4f5c-4b04-b519-c8c7561a41ee/extract-utilities/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.080841 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nlg7m_3235d0ea-4f5c-4b04-b519-c8c7561a41ee/extract-content/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.262310 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nlg7m_3235d0ea-4f5c-4b04-b519-c8c7561a41ee/registry-server/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.270164 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nfcp9_07eebc10-1379-4c90-b506-ab6d548702f2/extract-utilities/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.405037 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nfcp9_07eebc10-1379-4c90-b506-ab6d548702f2/extract-content/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.439381 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nfcp9_07eebc10-1379-4c90-b506-ab6d548702f2/extract-utilities/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.440467 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nfcp9_07eebc10-1379-4c90-b506-ab6d548702f2/extract-content/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.594886 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nfcp9_07eebc10-1379-4c90-b506-ab6d548702f2/extract-utilities/0.log" Jan 21 18:50:31 crc kubenswrapper[4823]: I0121 18:50:31.617480 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nfcp9_07eebc10-1379-4c90-b506-ab6d548702f2/extract-content/0.log" Jan 21 18:50:32 crc kubenswrapper[4823]: I0121 18:50:32.277980 4823 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nfcp9_07eebc10-1379-4c90-b506-ab6d548702f2/registry-server/0.log" Jan 21 18:50:44 crc kubenswrapper[4823]: I0121 18:50:44.953650 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-648b66fcb7-g2b4p_497404e0-8944-46a8-9d67-a4334950f54c/prometheus-operator-admission-webhook/0.log" Jan 21 18:50:44 crc kubenswrapper[4823]: I0121 18:50:44.985926 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-648b66fcb7-mt95f_52f5c7d9-13fa-4e74-be3b-d4aee174b931/prometheus-operator-admission-webhook/0.log" Jan 21 18:50:45 crc kubenswrapper[4823]: I0121 18:50:45.008016 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-vmjmj_654af63c-1337-4345-a2d6-4aa64462e8a9/prometheus-operator/0.log" Jan 21 18:50:45 crc kubenswrapper[4823]: I0121 18:50:45.071171 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:50:45 crc kubenswrapper[4823]: I0121 18:50:45.071236 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:50:45 crc kubenswrapper[4823]: I0121 18:50:45.213867 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l9hhp_7e55b3a4-cfae-41a1-a153-611e2e96dc75/operator/0.log" Jan 21 18:50:45 crc kubenswrapper[4823]: I0121 18:50:45.233990 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-76nsr_f7048824-b7bd-423e-b237-f4ccb584bb8a/perses-operator/0.log" Jan 21 18:50:57 crc kubenswrapper[4823]: I0121 18:50:57.509216 4823 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-blkpl container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 18:50:57 crc kubenswrapper[4823]: I0121 18:50:57.509769 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" podUID="7853d878-1dc8-4986-b9c8-e857f14a3230" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 18:50:57 crc kubenswrapper[4823]: I0121 18:50:57.532316 4823 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-blkpl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 18:50:57 crc kubenswrapper[4823]: I0121 18:50:57.532380 4823 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-blkpl" podUID="7853d878-1dc8-4986-b9c8-e857f14a3230" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 18:51:15 crc kubenswrapper[4823]: I0121 18:51:15.070251 4823 patch_prober.go:28] interesting pod/machine-config-daemon-4m4vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:51:15 crc kubenswrapper[4823]: I0121 18:51:15.070803 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:51:15 crc kubenswrapper[4823]: I0121 18:51:15.070894 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" Jan 21 18:51:15 crc kubenswrapper[4823]: I0121 18:51:15.071675 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668"} pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:51:15 crc kubenswrapper[4823]: I0121 18:51:15.071718 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerName="machine-config-daemon" containerID="cri-o://049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" gracePeriod=600 Jan 21 18:51:15 crc kubenswrapper[4823]: E0121 18:51:15.201996 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:51:15 crc kubenswrapper[4823]: I0121 18:51:15.712413 4823 generic.go:334] "Generic (PLEG): container finished" podID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" exitCode=0 Jan 21 18:51:15 crc kubenswrapper[4823]: I0121 18:51:15.712483 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerDied","Data":"049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668"} Jan 21 18:51:15 crc kubenswrapper[4823]: I0121 18:51:15.712543 4823 scope.go:117] "RemoveContainer" containerID="22e7ab45ab5c64a08bf3189ccc361f989aa527602f5fac36ce02fd9f02fa92e3" Jan 21 18:51:15 crc kubenswrapper[4823]: I0121 18:51:15.713240 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 
21 18:51:15 crc kubenswrapper[4823]: E0121 18:51:15.713625 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:51:27 crc kubenswrapper[4823]: I0121 18:51:27.344048 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:51:27 crc kubenswrapper[4823]: E0121 18:51:27.345075 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:51:40 crc kubenswrapper[4823]: I0121 18:51:40.344034 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:51:40 crc kubenswrapper[4823]: E0121 18:51:40.344984 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.395745 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m7njg"] Jan 21 18:51:53 crc kubenswrapper[4823]: E0121 18:51:53.396794 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da" containerName="container-00" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.396810 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da" containerName="container-00" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.397074 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9dd9ca-8eb9-4de5-bb53-c6e905fff7da" containerName="container-00" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.398821 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.410264 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7njg"] Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.543417 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvhww\" (UniqueName: \"kubernetes.io/projected/a983c12e-ec39-4353-9cdd-7ffa539872c5-kube-api-access-wvhww\") pod \"community-operators-m7njg\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.543560 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-utilities\") pod \"community-operators-m7njg\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.543626 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-catalog-content\") pod \"community-operators-m7njg\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.645454 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-utilities\") pod \"community-operators-m7njg\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.645570 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-catalog-content\") pod \"community-operators-m7njg\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.645630 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvhww\" (UniqueName: \"kubernetes.io/projected/a983c12e-ec39-4353-9cdd-7ffa539872c5-kube-api-access-wvhww\") pod \"community-operators-m7njg\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.646385 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-utilities\") pod \"community-operators-m7njg\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.646664 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-catalog-content\") pod \"community-operators-m7njg\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.668124 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wvhww\" (UniqueName: \"kubernetes.io/projected/a983c12e-ec39-4353-9cdd-7ffa539872c5-kube-api-access-wvhww\") pod \"community-operators-m7njg\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:53 crc kubenswrapper[4823]: I0121 18:51:53.748498 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:51:54 crc kubenswrapper[4823]: I0121 18:51:54.344361 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:51:54 crc kubenswrapper[4823]: E0121 18:51:54.344936 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:51:54 crc kubenswrapper[4823]: I0121 18:51:54.449177 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7njg"] Jan 21 18:51:55 crc kubenswrapper[4823]: I0121 18:51:55.097206 4823 generic.go:334] "Generic (PLEG): container finished" podID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerID="536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f" exitCode=0 Jan 21 18:51:55 crc kubenswrapper[4823]: I0121 18:51:55.097275 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7njg" event={"ID":"a983c12e-ec39-4353-9cdd-7ffa539872c5","Type":"ContainerDied","Data":"536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f"} Jan 21 18:51:55 crc kubenswrapper[4823]: I0121 18:51:55.097527 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7njg" event={"ID":"a983c12e-ec39-4353-9cdd-7ffa539872c5","Type":"ContainerStarted","Data":"c1d952f43ffdaec0fb8ae41204bdcb69d31914e362dd928c035668393ff14db0"} Jan 21 18:51:55 crc kubenswrapper[4823]: I0121 18:51:55.102867 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:51:58 crc kubenswrapper[4823]: I0121 18:51:58.124962 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7njg" event={"ID":"a983c12e-ec39-4353-9cdd-7ffa539872c5","Type":"ContainerStarted","Data":"52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567"} Jan 21 18:51:59 crc kubenswrapper[4823]: I0121 18:51:59.143702 4823 generic.go:334] "Generic (PLEG): container finished" podID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerID="52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567" exitCode=0 Jan 21 18:51:59 crc kubenswrapper[4823]: I0121 18:51:59.144029 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7njg" event={"ID":"a983c12e-ec39-4353-9cdd-7ffa539872c5","Type":"ContainerDied","Data":"52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567"} Jan 21 18:52:00 crc kubenswrapper[4823]: I0121 18:52:00.158796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7njg" 
event={"ID":"a983c12e-ec39-4353-9cdd-7ffa539872c5","Type":"ContainerStarted","Data":"a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a"} Jan 21 18:52:00 crc kubenswrapper[4823]: I0121 18:52:00.185119 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m7njg" podStartSLOduration=2.765094466 podStartE2EDuration="7.185091617s" podCreationTimestamp="2026-01-21 18:51:53 +0000 UTC" firstStartedPulling="2026-01-21 18:51:55.102615338 +0000 UTC m=+5716.028746198" lastFinishedPulling="2026-01-21 18:51:59.522612489 +0000 UTC m=+5720.448743349" observedRunningTime="2026-01-21 18:52:00.178365231 +0000 UTC m=+5721.104496091" watchObservedRunningTime="2026-01-21 18:52:00.185091617 +0000 UTC m=+5721.111222477" Jan 21 18:52:03 crc kubenswrapper[4823]: I0121 18:52:03.748629 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:52:03 crc kubenswrapper[4823]: I0121 18:52:03.751570 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:52:03 crc kubenswrapper[4823]: I0121 18:52:03.810503 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:52:04 crc kubenswrapper[4823]: I0121 18:52:04.268460 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:52:04 crc kubenswrapper[4823]: I0121 18:52:04.318147 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7njg"] Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.220768 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m7njg" podUID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerName="registry-server" containerID="cri-o://a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a" gracePeriod=2 Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.736678 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.828683 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-catalog-content\") pod \"a983c12e-ec39-4353-9cdd-7ffa539872c5\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.828783 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-utilities\") pod \"a983c12e-ec39-4353-9cdd-7ffa539872c5\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.828825 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvhww\" (UniqueName: \"kubernetes.io/projected/a983c12e-ec39-4353-9cdd-7ffa539872c5-kube-api-access-wvhww\") pod \"a983c12e-ec39-4353-9cdd-7ffa539872c5\" (UID: \"a983c12e-ec39-4353-9cdd-7ffa539872c5\") " Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.830534 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-utilities" (OuterVolumeSpecName: "utilities") pod "a983c12e-ec39-4353-9cdd-7ffa539872c5" (UID: "a983c12e-ec39-4353-9cdd-7ffa539872c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.857227 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a983c12e-ec39-4353-9cdd-7ffa539872c5-kube-api-access-wvhww" (OuterVolumeSpecName: "kube-api-access-wvhww") pod "a983c12e-ec39-4353-9cdd-7ffa539872c5" (UID: "a983c12e-ec39-4353-9cdd-7ffa539872c5"). InnerVolumeSpecName "kube-api-access-wvhww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.894944 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a983c12e-ec39-4353-9cdd-7ffa539872c5" (UID: "a983c12e-ec39-4353-9cdd-7ffa539872c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.932053 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.932093 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvhww\" (UniqueName: \"kubernetes.io/projected/a983c12e-ec39-4353-9cdd-7ffa539872c5-kube-api-access-wvhww\") on node \"crc\" DevicePath \"\"" Jan 21 18:52:06 crc kubenswrapper[4823]: I0121 18:52:06.932112 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a983c12e-ec39-4353-9cdd-7ffa539872c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.235018 4823 generic.go:334] "Generic (PLEG): container finished" podID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerID="a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a" exitCode=0 Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.235076 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7njg" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.235085 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7njg" event={"ID":"a983c12e-ec39-4353-9cdd-7ffa539872c5","Type":"ContainerDied","Data":"a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a"} Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.236467 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7njg" event={"ID":"a983c12e-ec39-4353-9cdd-7ffa539872c5","Type":"ContainerDied","Data":"c1d952f43ffdaec0fb8ae41204bdcb69d31914e362dd928c035668393ff14db0"} Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.236514 4823 scope.go:117] "RemoveContainer" containerID="a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.288186 4823 scope.go:117] "RemoveContainer" containerID="52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.290810 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7njg"] Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.303609 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m7njg"] Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.314336 4823 scope.go:117] "RemoveContainer" containerID="536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.355732 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a983c12e-ec39-4353-9cdd-7ffa539872c5" path="/var/lib/kubelet/pods/a983c12e-ec39-4353-9cdd-7ffa539872c5/volumes" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.378737 4823 scope.go:117] "RemoveContainer" containerID="a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a" Jan 21 18:52:07 crc kubenswrapper[4823]: E0121 18:52:07.379156 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a\": container with ID 
starting with a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a not found: ID does not exist" containerID="a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.379186 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a"} err="failed to get container status \"a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a\": rpc error: code = NotFound desc = could not find container \"a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a\": container with ID starting with a4e26be1f9ca52a7a3681e98931023967d0ea8dcd149791f8577a5709e4af59a not found: ID does not exist" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.379204 4823 scope.go:117] "RemoveContainer" containerID="52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567" Jan 21 18:52:07 crc kubenswrapper[4823]: E0121 18:52:07.379437 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567\": container with ID starting with 52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567 not found: ID does not exist" containerID="52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.379458 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567"} err="failed to get container status \"52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567\": rpc error: code = NotFound desc = could not find container \"52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567\": container with ID starting with 52de907f794c0ddc3bf3e5ec6457b7d3cc84445a3d3339d7a39e4c3e4f0ce567 not found: ID does not exist" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.379470 4823 scope.go:117] "RemoveContainer" containerID="536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f" Jan 21 18:52:07 crc kubenswrapper[4823]: E0121 18:52:07.379765 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f\": container with ID starting with 536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f not found: ID does not exist" containerID="536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f" Jan 21 18:52:07 crc kubenswrapper[4823]: I0121 18:52:07.379792 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f"} err="failed to get container status \"536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f\": rpc error: code = NotFound desc = could not find container \"536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f\": container with ID starting with 536e6410b7769f11dbdfce21945f776aabf824db3a3b0352fa2f323add79d07f not found: ID does not exist" Jan 21 18:52:09 crc kubenswrapper[4823]: I0121 18:52:09.358341 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:52:09 crc kubenswrapper[4823]: E0121 18:52:09.359023 4823 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:52:21 crc kubenswrapper[4823]: I0121 18:52:21.345057 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:52:21 crc kubenswrapper[4823]: E0121 18:52:21.345761 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:52:35 crc kubenswrapper[4823]: I0121 18:52:35.347474 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:52:35 crc kubenswrapper[4823]: E0121 18:52:35.350308 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:52:46 crc kubenswrapper[4823]: I0121 18:52:46.619717 4823 generic.go:334] "Generic (PLEG): container finished" podID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" containerID="ab1c619706a7746df8a83c415e6e93102b86ef090ddae5dac7da3fd22f72d57e" exitCode=0 Jan 21 18:52:46 crc kubenswrapper[4823]: I0121 18:52:46.619815 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t9pf9/must-gather-snbwp" event={"ID":"e249ce50-4aec-4e92-8cca-0487b4cc1e5e","Type":"ContainerDied","Data":"ab1c619706a7746df8a83c415e6e93102b86ef090ddae5dac7da3fd22f72d57e"} Jan 21 18:52:46 crc kubenswrapper[4823]: I0121 18:52:46.621005 4823 scope.go:117] "RemoveContainer" containerID="ab1c619706a7746df8a83c415e6e93102b86ef090ddae5dac7da3fd22f72d57e" Jan 21 18:52:47 crc kubenswrapper[4823]: I0121 18:52:47.234416 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t9pf9_must-gather-snbwp_e249ce50-4aec-4e92-8cca-0487b4cc1e5e/gather/0.log" Jan 21 18:52:49 crc kubenswrapper[4823]: I0121 18:52:49.357841 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:52:49 crc kubenswrapper[4823]: E0121 18:52:49.358457 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:52:55 crc kubenswrapper[4823]: I0121 18:52:55.519236 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t9pf9/must-gather-snbwp"] Jan 21 18:52:55 
crc kubenswrapper[4823]: I0121 18:52:55.522111 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-t9pf9/must-gather-snbwp" podUID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" containerName="copy" containerID="cri-o://b7612281b8128624773550db4cd772ad26adf6a329e77c20d6ed064c03d135b4" gracePeriod=2 Jan 21 18:52:55 crc kubenswrapper[4823]: I0121 18:52:55.569100 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t9pf9/must-gather-snbwp"] Jan 21 18:52:55 crc kubenswrapper[4823]: I0121 18:52:55.721819 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t9pf9_must-gather-snbwp_e249ce50-4aec-4e92-8cca-0487b4cc1e5e/copy/0.log" Jan 21 18:52:55 crc kubenswrapper[4823]: I0121 18:52:55.722708 4823 generic.go:334] "Generic (PLEG): container finished" podID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" containerID="b7612281b8128624773550db4cd772ad26adf6a329e77c20d6ed064c03d135b4" exitCode=143 Jan 21 18:52:55 crc kubenswrapper[4823]: I0121 18:52:55.986424 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t9pf9_must-gather-snbwp_e249ce50-4aec-4e92-8cca-0487b4cc1e5e/copy/0.log" Jan 21 18:52:55 crc kubenswrapper[4823]: I0121 18:52:55.987170 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.080676 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhtt2\" (UniqueName: \"kubernetes.io/projected/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-kube-api-access-jhtt2\") pod \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\" (UID: \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\") " Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.080947 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-must-gather-output\") pod \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\" (UID: \"e249ce50-4aec-4e92-8cca-0487b4cc1e5e\") " Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.088091 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-kube-api-access-jhtt2" (OuterVolumeSpecName: "kube-api-access-jhtt2") pod "e249ce50-4aec-4e92-8cca-0487b4cc1e5e" (UID: "e249ce50-4aec-4e92-8cca-0487b4cc1e5e"). InnerVolumeSpecName "kube-api-access-jhtt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.183169 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhtt2\" (UniqueName: \"kubernetes.io/projected/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-kube-api-access-jhtt2\") on node \"crc\" DevicePath \"\"" Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.255243 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e249ce50-4aec-4e92-8cca-0487b4cc1e5e" (UID: "e249ce50-4aec-4e92-8cca-0487b4cc1e5e"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.285728 4823 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e249ce50-4aec-4e92-8cca-0487b4cc1e5e-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.733057 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t9pf9_must-gather-snbwp_e249ce50-4aec-4e92-8cca-0487b4cc1e5e/copy/0.log" Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.733477 4823 scope.go:117] "RemoveContainer" containerID="b7612281b8128624773550db4cd772ad26adf6a329e77c20d6ed064c03d135b4" Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.733515 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t9pf9/must-gather-snbwp" Jan 21 18:52:56 crc kubenswrapper[4823]: I0121 18:52:56.757301 4823 scope.go:117] "RemoveContainer" containerID="ab1c619706a7746df8a83c415e6e93102b86ef090ddae5dac7da3fd22f72d57e" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.244154 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d27mr"] Jan 21 18:52:57 crc kubenswrapper[4823]: E0121 18:52:57.244657 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerName="extract-utilities" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.244681 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerName="extract-utilities" Jan 21 18:52:57 crc kubenswrapper[4823]: E0121 18:52:57.244699 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" containerName="gather" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.244707 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" containerName="gather" Jan 21 18:52:57 crc kubenswrapper[4823]: E0121 18:52:57.244745 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerName="registry-server" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.244755 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerName="registry-server" Jan 21 18:52:57 crc kubenswrapper[4823]: E0121 18:52:57.244775 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerName="extract-content" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.244783 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerName="extract-content" Jan 21 18:52:57 crc kubenswrapper[4823]: E0121 18:52:57.244799 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" containerName="copy" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.244808 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" containerName="copy" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.245119 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" containerName="gather" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.245159 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" containerName="copy" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.245168 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a983c12e-ec39-4353-9cdd-7ffa539872c5" containerName="registry-server" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.246753 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.256659 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d27mr"] Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.313574 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-catalog-content\") pod \"redhat-marketplace-d27mr\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.313780 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtgp\" (UniqueName: \"kubernetes.io/projected/dba8050d-d825-44b7-9935-932f13f919da-kube-api-access-pxtgp\") pod \"redhat-marketplace-d27mr\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.313844 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-utilities\") pod \"redhat-marketplace-d27mr\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.356115 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e249ce50-4aec-4e92-8cca-0487b4cc1e5e" path="/var/lib/kubelet/pods/e249ce50-4aec-4e92-8cca-0487b4cc1e5e/volumes" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.415540 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxtgp\" (UniqueName: \"kubernetes.io/projected/dba8050d-d825-44b7-9935-932f13f919da-kube-api-access-pxtgp\") pod \"redhat-marketplace-d27mr\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.415606 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-utilities\") pod \"redhat-marketplace-d27mr\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.415707 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-catalog-content\") pod \"redhat-marketplace-d27mr\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.416552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-utilities\") pod 
\"redhat-marketplace-d27mr\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.416702 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-catalog-content\") pod \"redhat-marketplace-d27mr\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.437552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxtgp\" (UniqueName: \"kubernetes.io/projected/dba8050d-d825-44b7-9935-932f13f919da-kube-api-access-pxtgp\") pod \"redhat-marketplace-d27mr\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:57 crc kubenswrapper[4823]: I0121 18:52:57.573643 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:52:58 crc kubenswrapper[4823]: I0121 18:52:58.134901 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d27mr"] Jan 21 18:52:58 crc kubenswrapper[4823]: I0121 18:52:58.778154 4823 generic.go:334] "Generic (PLEG): container finished" podID="dba8050d-d825-44b7-9935-932f13f919da" containerID="c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4" exitCode=0 Jan 21 18:52:58 crc kubenswrapper[4823]: I0121 18:52:58.778201 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27mr" event={"ID":"dba8050d-d825-44b7-9935-932f13f919da","Type":"ContainerDied","Data":"c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4"} Jan 21 18:52:58 crc kubenswrapper[4823]: I0121 18:52:58.778450 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27mr" event={"ID":"dba8050d-d825-44b7-9935-932f13f919da","Type":"ContainerStarted","Data":"8493ec27c4ebfaf7393c9f40e6a4a878a8cf04ee43b0b667a1eb991b7b21b9c7"} Jan 21 18:52:59 crc kubenswrapper[4823]: I0121 18:52:59.794371 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27mr" event={"ID":"dba8050d-d825-44b7-9935-932f13f919da","Type":"ContainerStarted","Data":"17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34"} Jan 21 18:53:00 crc kubenswrapper[4823]: I0121 18:53:00.813181 4823 generic.go:334] "Generic (PLEG): container finished" podID="dba8050d-d825-44b7-9935-932f13f919da" containerID="17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34" exitCode=0 Jan 21 18:53:00 crc kubenswrapper[4823]: I0121 18:53:00.813458 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27mr" event={"ID":"dba8050d-d825-44b7-9935-932f13f919da","Type":"ContainerDied","Data":"17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34"} Jan 21 18:53:01 crc kubenswrapper[4823]: I0121 18:53:01.826357 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27mr" event={"ID":"dba8050d-d825-44b7-9935-932f13f919da","Type":"ContainerStarted","Data":"4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6"} Jan 21 18:53:01 crc kubenswrapper[4823]: I0121 18:53:01.854075 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-d27mr" podStartSLOduration=2.393029866 podStartE2EDuration="4.854045484s" podCreationTimestamp="2026-01-21 18:52:57 +0000 UTC" firstStartedPulling="2026-01-21 18:52:58.780683355 +0000 UTC m=+5779.706814215" lastFinishedPulling="2026-01-21 18:53:01.241698973 +0000 UTC m=+5782.167829833" observedRunningTime="2026-01-21 18:53:01.84574242 +0000 UTC m=+5782.771873270" watchObservedRunningTime="2026-01-21 18:53:01.854045484 +0000 UTC m=+5782.780176364" Jan 21 18:53:02 crc kubenswrapper[4823]: I0121 18:53:02.345781 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:53:02 crc kubenswrapper[4823]: E0121 18:53:02.346353 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:53:07 crc kubenswrapper[4823]: I0121 18:53:07.574231 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:53:07 crc kubenswrapper[4823]: I0121 18:53:07.574806 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:53:07 crc kubenswrapper[4823]: I0121 18:53:07.628504 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:53:07 crc kubenswrapper[4823]: I0121 18:53:07.929310 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:53:07 crc kubenswrapper[4823]: I0121 18:53:07.987463 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d27mr"] Jan 21 18:53:09 crc kubenswrapper[4823]: I0121 18:53:09.901436 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d27mr" podUID="dba8050d-d825-44b7-9935-932f13f919da" containerName="registry-server" containerID="cri-o://4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6" gracePeriod=2 Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.548746 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.631443 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-catalog-content\") pod \"dba8050d-d825-44b7-9935-932f13f919da\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.631657 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxtgp\" (UniqueName: \"kubernetes.io/projected/dba8050d-d825-44b7-9935-932f13f919da-kube-api-access-pxtgp\") pod \"dba8050d-d825-44b7-9935-932f13f919da\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.631775 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-utilities\") pod \"dba8050d-d825-44b7-9935-932f13f919da\" (UID: \"dba8050d-d825-44b7-9935-932f13f919da\") " Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.632958 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-utilities" (OuterVolumeSpecName: "utilities") pod "dba8050d-d825-44b7-9935-932f13f919da" (UID: "dba8050d-d825-44b7-9935-932f13f919da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.638630 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dba8050d-d825-44b7-9935-932f13f919da-kube-api-access-pxtgp" (OuterVolumeSpecName: "kube-api-access-pxtgp") pod "dba8050d-d825-44b7-9935-932f13f919da" (UID: "dba8050d-d825-44b7-9935-932f13f919da"). InnerVolumeSpecName "kube-api-access-pxtgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.661532 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dba8050d-d825-44b7-9935-932f13f919da" (UID: "dba8050d-d825-44b7-9935-932f13f919da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.735161 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxtgp\" (UniqueName: \"kubernetes.io/projected/dba8050d-d825-44b7-9935-932f13f919da-kube-api-access-pxtgp\") on node \"crc\" DevicePath \"\"" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.735208 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.735219 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dba8050d-d825-44b7-9935-932f13f919da-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.913902 4823 generic.go:334] "Generic (PLEG): container finished" podID="dba8050d-d825-44b7-9935-932f13f919da" containerID="4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6" exitCode=0 Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.913962 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27mr" event={"ID":"dba8050d-d825-44b7-9935-932f13f919da","Type":"ContainerDied","Data":"4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6"} Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.913991 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27mr" event={"ID":"dba8050d-d825-44b7-9935-932f13f919da","Type":"ContainerDied","Data":"8493ec27c4ebfaf7393c9f40e6a4a878a8cf04ee43b0b667a1eb991b7b21b9c7"} Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.913988 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d27mr" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.914006 4823 scope.go:117] "RemoveContainer" containerID="4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.943156 4823 scope.go:117] "RemoveContainer" containerID="17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.963558 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d27mr"] Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.964480 4823 scope.go:117] "RemoveContainer" containerID="c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4" Jan 21 18:53:10 crc kubenswrapper[4823]: I0121 18:53:10.977038 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d27mr"] Jan 21 18:53:11 crc kubenswrapper[4823]: I0121 18:53:11.016269 4823 scope.go:117] "RemoveContainer" containerID="4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6" Jan 21 18:53:11 crc kubenswrapper[4823]: E0121 18:53:11.016775 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6\": container with ID starting with 4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6 not found: ID does not exist" containerID="4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6" Jan 21 18:53:11 crc kubenswrapper[4823]: I0121 18:53:11.016832 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6"} err="failed to get container status \"4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6\": rpc error: code = NotFound desc = could not find container \"4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6\": container with ID starting with 4885091a3ee022c8885c95838a48354a61a88b92685ceb967fbc270d31f26ed6 not found: ID does not exist" Jan 21 18:53:11 crc kubenswrapper[4823]: I0121 18:53:11.016880 4823 scope.go:117] "RemoveContainer" containerID="17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34" Jan 21 18:53:11 crc kubenswrapper[4823]: E0121 18:53:11.017218 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34\": container with ID starting with 17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34 not found: ID does not exist" containerID="17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34" Jan 21 18:53:11 crc kubenswrapper[4823]: I0121 18:53:11.017262 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34"} err="failed to get container status \"17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34\": rpc error: code = NotFound desc = could not find container \"17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34\": container with ID starting with 17b17b20f65e33a555e1ac508f80aa202528e862c6512f579444cd813078de34 not found: ID does not exist" Jan 21 18:53:11 crc kubenswrapper[4823]: I0121 18:53:11.017290 4823 scope.go:117] "RemoveContainer" 
containerID="c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4" Jan 21 18:53:11 crc kubenswrapper[4823]: E0121 18:53:11.017551 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4\": container with ID starting with c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4 not found: ID does not exist" containerID="c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4" Jan 21 18:53:11 crc kubenswrapper[4823]: I0121 18:53:11.017583 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4"} err="failed to get container status \"c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4\": rpc error: code = NotFound desc = could not find container \"c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4\": container with ID starting with c82e7bdd203eb09ba381adf5ec0422da02f6b6c6f5bb1c84571c03a8bf66efa4 not found: ID does not exist" Jan 21 18:53:11 crc kubenswrapper[4823]: I0121 18:53:11.356692 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dba8050d-d825-44b7-9935-932f13f919da" path="/var/lib/kubelet/pods/dba8050d-d825-44b7-9935-932f13f919da/volumes" Jan 21 18:53:14 crc kubenswrapper[4823]: I0121 18:53:14.343670 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:53:14 crc kubenswrapper[4823]: E0121 18:53:14.344251 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:53:27 crc kubenswrapper[4823]: I0121 18:53:27.343921 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:53:27 crc kubenswrapper[4823]: E0121 18:53:27.344748 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:53:31 crc kubenswrapper[4823]: I0121 18:53:31.008068 4823 scope.go:117] "RemoveContainer" containerID="cfb0f586c4da4e724fdea0b33425b7ac9765aec60fd61f1cfe45560e5ced566e" Jan 21 18:53:38 crc kubenswrapper[4823]: I0121 18:53:38.344136 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:53:38 crc kubenswrapper[4823]: E0121 18:53:38.345685 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" 
podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:53:51 crc kubenswrapper[4823]: I0121 18:53:51.343362 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:53:51 crc kubenswrapper[4823]: E0121 18:53:51.344079 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:54:06 crc kubenswrapper[4823]: I0121 18:54:06.343729 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:54:06 crc kubenswrapper[4823]: E0121 18:54:06.345703 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.731121 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-77brp"] Jan 21 18:54:12 crc kubenswrapper[4823]: E0121 18:54:12.732563 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba8050d-d825-44b7-9935-932f13f919da" containerName="registry-server" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.732582 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba8050d-d825-44b7-9935-932f13f919da" containerName="registry-server" Jan 21 18:54:12 crc kubenswrapper[4823]: E0121 18:54:12.732604 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba8050d-d825-44b7-9935-932f13f919da" containerName="extract-content" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.732614 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba8050d-d825-44b7-9935-932f13f919da" containerName="extract-content" Jan 21 18:54:12 crc kubenswrapper[4823]: E0121 18:54:12.732637 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba8050d-d825-44b7-9935-932f13f919da" containerName="extract-utilities" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.732646 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba8050d-d825-44b7-9935-932f13f919da" containerName="extract-utilities" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.732933 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="dba8050d-d825-44b7-9935-932f13f919da" containerName="registry-server" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.734799 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.766363 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77brp"] Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.858401 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-utilities\") pod \"redhat-operators-77brp\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.858456 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m7zd\" (UniqueName: \"kubernetes.io/projected/15e567ab-bdbd-4749-989b-e96147224114-kube-api-access-8m7zd\") pod \"redhat-operators-77brp\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.858528 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-catalog-content\") pod \"redhat-operators-77brp\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.960811 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-utilities\") pod \"redhat-operators-77brp\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.960875 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m7zd\" (UniqueName: \"kubernetes.io/projected/15e567ab-bdbd-4749-989b-e96147224114-kube-api-access-8m7zd\") pod \"redhat-operators-77brp\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.960923 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-catalog-content\") pod \"redhat-operators-77brp\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.961429 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-utilities\") pod \"redhat-operators-77brp\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.961456 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-catalog-content\") pod \"redhat-operators-77brp\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:12 crc kubenswrapper[4823]: I0121 18:54:12.988411 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8m7zd\" (UniqueName: \"kubernetes.io/projected/15e567ab-bdbd-4749-989b-e96147224114-kube-api-access-8m7zd\") pod \"redhat-operators-77brp\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:13 crc kubenswrapper[4823]: I0121 18:54:13.097514 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:13 crc kubenswrapper[4823]: I0121 18:54:13.572962 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77brp"] Jan 21 18:54:14 crc kubenswrapper[4823]: I0121 18:54:14.533116 4823 generic.go:334] "Generic (PLEG): container finished" podID="15e567ab-bdbd-4749-989b-e96147224114" containerID="3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295" exitCode=0 Jan 21 18:54:14 crc kubenswrapper[4823]: I0121 18:54:14.533173 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77brp" event={"ID":"15e567ab-bdbd-4749-989b-e96147224114","Type":"ContainerDied","Data":"3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295"} Jan 21 18:54:14 crc kubenswrapper[4823]: I0121 18:54:14.533776 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77brp" event={"ID":"15e567ab-bdbd-4749-989b-e96147224114","Type":"ContainerStarted","Data":"624cdc4f47694bd99e48fb7c785ca7aa7e0c8963c61917a323b236a2e36614a1"} Jan 21 18:54:15 crc kubenswrapper[4823]: I0121 18:54:15.545437 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77brp" event={"ID":"15e567ab-bdbd-4749-989b-e96147224114","Type":"ContainerStarted","Data":"80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737"} Jan 21 18:54:16 crc kubenswrapper[4823]: I0121 18:54:16.556738 4823 generic.go:334] "Generic (PLEG): container finished" podID="15e567ab-bdbd-4749-989b-e96147224114" containerID="80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737" exitCode=0 Jan 21 18:54:16 crc kubenswrapper[4823]: I0121 18:54:16.556803 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77brp" event={"ID":"15e567ab-bdbd-4749-989b-e96147224114","Type":"ContainerDied","Data":"80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737"} Jan 21 18:54:17 crc kubenswrapper[4823]: I0121 18:54:17.569638 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77brp" event={"ID":"15e567ab-bdbd-4749-989b-e96147224114","Type":"ContainerStarted","Data":"bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3"} Jan 21 18:54:17 crc kubenswrapper[4823]: I0121 18:54:17.586461 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-77brp" podStartSLOduration=3.136752187 podStartE2EDuration="5.586445594s" podCreationTimestamp="2026-01-21 18:54:12 +0000 UTC" firstStartedPulling="2026-01-21 18:54:14.535073217 +0000 UTC m=+5855.461204077" lastFinishedPulling="2026-01-21 18:54:16.984766624 +0000 UTC m=+5857.910897484" observedRunningTime="2026-01-21 18:54:17.585409659 +0000 UTC m=+5858.511540519" watchObservedRunningTime="2026-01-21 18:54:17.586445594 +0000 UTC m=+5858.512576454" Jan 21 18:54:20 crc kubenswrapper[4823]: I0121 18:54:20.343424 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 
18:54:20 crc kubenswrapper[4823]: E0121 18:54:20.344141 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:54:23 crc kubenswrapper[4823]: I0121 18:54:23.097988 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:23 crc kubenswrapper[4823]: I0121 18:54:23.098287 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:23 crc kubenswrapper[4823]: I0121 18:54:23.144494 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:23 crc kubenswrapper[4823]: I0121 18:54:23.667926 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:23 crc kubenswrapper[4823]: I0121 18:54:23.726326 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-77brp"] Jan 21 18:54:25 crc kubenswrapper[4823]: I0121 18:54:25.638095 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-77brp" podUID="15e567ab-bdbd-4749-989b-e96147224114" containerName="registry-server" containerID="cri-o://bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3" gracePeriod=2 Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.256154 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.318920 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-utilities\") pod \"15e567ab-bdbd-4749-989b-e96147224114\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.319191 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-catalog-content\") pod \"15e567ab-bdbd-4749-989b-e96147224114\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.319441 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m7zd\" (UniqueName: \"kubernetes.io/projected/15e567ab-bdbd-4749-989b-e96147224114-kube-api-access-8m7zd\") pod \"15e567ab-bdbd-4749-989b-e96147224114\" (UID: \"15e567ab-bdbd-4749-989b-e96147224114\") " Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.320213 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-utilities" (OuterVolumeSpecName: "utilities") pod "15e567ab-bdbd-4749-989b-e96147224114" (UID: "15e567ab-bdbd-4749-989b-e96147224114"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.328072 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e567ab-bdbd-4749-989b-e96147224114-kube-api-access-8m7zd" (OuterVolumeSpecName: "kube-api-access-8m7zd") pod "15e567ab-bdbd-4749-989b-e96147224114" (UID: "15e567ab-bdbd-4749-989b-e96147224114"). InnerVolumeSpecName "kube-api-access-8m7zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.423739 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m7zd\" (UniqueName: \"kubernetes.io/projected/15e567ab-bdbd-4749-989b-e96147224114-kube-api-access-8m7zd\") on node \"crc\" DevicePath \"\"" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.423794 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.651334 4823 generic.go:334] "Generic (PLEG): container finished" podID="15e567ab-bdbd-4749-989b-e96147224114" containerID="bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3" exitCode=0 Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.651406 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77brp" event={"ID":"15e567ab-bdbd-4749-989b-e96147224114","Type":"ContainerDied","Data":"bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3"} Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.651547 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-77brp" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.652620 4823 scope.go:117] "RemoveContainer" containerID="bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.652602 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77brp" event={"ID":"15e567ab-bdbd-4749-989b-e96147224114","Type":"ContainerDied","Data":"624cdc4f47694bd99e48fb7c785ca7aa7e0c8963c61917a323b236a2e36614a1"} Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.672268 4823 scope.go:117] "RemoveContainer" containerID="80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.698044 4823 scope.go:117] "RemoveContainer" containerID="3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.753669 4823 scope.go:117] "RemoveContainer" containerID="bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3" Jan 21 18:54:26 crc kubenswrapper[4823]: E0121 18:54:26.754129 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3\": container with ID starting with bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3 not found: ID does not exist" containerID="bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.754240 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3"} err="failed to get container status \"bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3\": rpc error: code = NotFound desc = could not find container \"bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3\": container with ID starting with bb7a78ab29c8e713056f63a4e8cba2b3b7f3cfdbeb3efa0c50fd61e4b277e4e3 not found: ID does not exist" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.754335 4823 scope.go:117] "RemoveContainer" containerID="80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737" Jan 21 18:54:26 crc kubenswrapper[4823]: E0121 18:54:26.754846 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737\": container with ID starting with 80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737 not found: ID does not exist" containerID="80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.754884 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737"} err="failed to get container status \"80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737\": rpc error: code = NotFound desc = could not find container \"80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737\": container with ID starting with 80e6ed769556e1ba5f861024f65fd3414af49e275f167834cfaac347ac86d737 not found: ID does not exist" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.754897 4823 scope.go:117] "RemoveContainer" containerID="3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295" Jan 21 18:54:26 crc kubenswrapper[4823]: E0121 18:54:26.755305 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295\": container with ID starting with 3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295 not found: ID does not exist" containerID="3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295" Jan 21 18:54:26 crc kubenswrapper[4823]: I0121 18:54:26.755337 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295"} err="failed to get container status \"3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295\": rpc error: code = NotFound desc = could not find container \"3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295\": container with ID starting with 3a9557d2bccad6d6b7e82f938c8aa0013054ea61342385b6ef6035e7d15a2295 not found: ID does not exist" Jan 21 18:54:28 crc kubenswrapper[4823]: I0121 18:54:28.476083 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15e567ab-bdbd-4749-989b-e96147224114" (UID: "15e567ab-bdbd-4749-989b-e96147224114"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 18:54:28 crc kubenswrapper[4823]: I0121 18:54:28.568112 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e567ab-bdbd-4749-989b-e96147224114-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:54:28 crc kubenswrapper[4823]: I0121 18:54:28.789527 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-77brp"] Jan 21 18:54:28 crc kubenswrapper[4823]: I0121 18:54:28.802051 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-77brp"] Jan 21 18:54:29 crc kubenswrapper[4823]: I0121 18:54:29.355290 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e567ab-bdbd-4749-989b-e96147224114" path="/var/lib/kubelet/pods/15e567ab-bdbd-4749-989b-e96147224114/volumes" Jan 21 18:54:31 crc kubenswrapper[4823]: I0121 18:54:31.133989 4823 scope.go:117] "RemoveContainer" containerID="0d0a46c58befdd25e6db33a66f1b548a247c4fded0c16cd9b1e3e4aa9212b42b" Jan 21 18:54:31 crc kubenswrapper[4823]: I0121 18:54:31.158754 4823 scope.go:117] "RemoveContainer" containerID="af59006ea5cc96f6bb68b3fec59253270fd7441e4d8a3df0fa49378627f39c45" Jan 21 18:54:35 crc kubenswrapper[4823]: I0121 18:54:35.344083 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:54:35 crc kubenswrapper[4823]: E0121 18:54:35.344861 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:54:49 crc kubenswrapper[4823]: I0121 18:54:49.352959 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:54:49 crc kubenswrapper[4823]: E0121 18:54:49.353998 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:55:01 crc kubenswrapper[4823]: I0121 18:55:01.344328 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:55:01 crc kubenswrapper[4823]: E0121 18:55:01.346326 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:55:13 crc kubenswrapper[4823]: I0121 18:55:13.344797 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:55:13 crc kubenswrapper[4823]: E0121 18:55:13.345738 4823 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:55:28 crc kubenswrapper[4823]: I0121 18:55:28.343837 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:55:28 crc kubenswrapper[4823]: E0121 18:55:28.344610 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:55:39 crc kubenswrapper[4823]: I0121 18:55:39.350200 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:55:39 crc kubenswrapper[4823]: E0121 18:55:39.351206 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:55:50 crc kubenswrapper[4823]: I0121 18:55:50.344683 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:55:50 crc kubenswrapper[4823]: E0121 18:55:50.345502 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:56:02 crc kubenswrapper[4823]: I0121 18:56:02.344588 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:56:02 crc kubenswrapper[4823]: E0121 18:56:02.345337 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:56:14 crc kubenswrapper[4823]: I0121 18:56:14.344794 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:56:14 crc kubenswrapper[4823]: E0121 18:56:14.345591 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4m4vw_openshift-machine-config-operator(7aedcad4-c5da-40a2-a783-ce9096a63c6e)\"" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" podUID="7aedcad4-c5da-40a2-a783-ce9096a63c6e" Jan 21 18:56:28 crc kubenswrapper[4823]: I0121 18:56:28.343459 4823 scope.go:117] "RemoveContainer" containerID="049ca02112be8f278e4cf1eab8d210440fa3662673d9dc621b37a198ff0b0668" Jan 21 18:56:28 crc kubenswrapper[4823]: I0121 18:56:28.799529 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4m4vw" event={"ID":"7aedcad4-c5da-40a2-a783-ce9096a63c6e","Type":"ContainerStarted","Data":"c698e4619265065102288fcad87309e0bc1114f187c2b32a470ea9ed1c46c5c1"}